
GOATReads: Psychology

The Hybrid Tipping Zone

4 steps to widen the closing window of our cognitive independence.

Imagine explaining to someone raised with smartphones what it felt like to be genuinely lost—that disorientation before GPS, when you navigated using landmarks and intuition. That uncertainty, followed by the satisfaction of finding your way, represents something we're rapidly losing: the full spectrum of human cognitive experience.

The Psychology of Convergent Change

We're navigating the Hybrid Tipping Zone: As the last generation with lived experience of pre-AI decision-making, we possess episodic memory of unmediated thinking. These aren't nostalgic recollections but crucial psychological data points on human capability that younger generations may never acquire. This loss puts humanity at risk of agency decay—part of an "ABCD" of AI issues encompassing agency decay, bond erosion, climate conundrum, and division of society.

The ABCD Framework: Mental Dynamics

We've moved from AI experimentation to integration, opening the door to reliance—the last phase before addiction, when the absence of digital crutches leads to paralysis.

Agency Atrophy: Research on automation bias shows humans systematically over-rely on algorithmic recommendations, even when our own judgment would be superior. This isn't laziness; it's a predictable cognitive bias that intensifies with exposure. Frequent GPS users show measurably reduced activity in the hippocampus, the brain region responsible for spatial memory. We're literally watching our brains reduce internal capacity when external support is available.

Social Scaffolding Dissolution: AI increasingly mediates human interaction, reducing opportunities for authentic social learning. Our capacity for empathy, emotional regulation, and conflict resolution develops through direct human contact. When algorithms curate our feeds and suggest responses, we lose essential social development opportunities.

Environmental Disconnection: One Gemini query uses nearly 10 times the energy of a Google search, contributing to data center consumption that may double by 2026. Yet we don't "see" these consequences, creating psychological distance between our actions and planetary impact and making moral disengagement easy.

Identity Fragmentation: When AI generates content indistinguishable from human creation, our sense of unique human value becomes confused. What happens to an artist's identity when ChatGPT generates in seconds what would take them weeks of study and effort—especially when the result is superior?

The Focusing Illusion of AI Benefits

The focusing illusion is our tendency to overweight factors capturing our attention while underestimating less visible effects. Our AI relationship exhibits this perfectly: Immediate convenience gains attention while gradual cognitive changes remain invisible. We notice that AI helps us write emails faster but miss that we're losing our facility for constructing language. We appreciate AI research assistance but don't recognize our declining capacity for synthesizing information. We enjoy AI-generated entertainment but overlook our reduced tolerance for the ambiguity that characterizes authentic human creativity.

Values-Driven Choices

The path forward requires deliberate choices conditioned by intentional self-regulation—making choices aligned with long-term values rather than immediate impulses. Applied to AI, this means asking not just "What can this technology do?" but "What kind of person do I want to become through using this technology?" Beyond "What can AI do?" we must ask "What should I use AI for?" If we desire autonomous choice and human flourishing with planetary dignity, we must make present choices that set us on that trajectory.

Enacting ProSocial AI: 4 Steps to Take Today

ProSocial AI refers to systems tailored, trained, tested, and targeted to bring out the best in people and planet. At the personal level, four practical steps emerge:

Think: Clarifying Your Aspirations. Begin with honest self-reflection about core values. What cognitive capacity do you want to maintain? Which aspects of thinking feel most authentically "you"? Create cognitive baselines by engaging in activities without AI assistance—writing longhand, navigating without GPS, solving problems through conversation rather than search. And practice metacognition: When using AI tools, observe your cognitive state before and after. Are you learning or outsourcing? Growing or atrophying?

Talk: Authentic Human Connection. Engage in conversations about AI's psychological impact, especially with young people who may never have experienced pre-AI cognition. Share your experiences of cognitive change candidly. When has AI enhanced your thinking? When has it made you feel less capable? Practice discussions that deliberately avoid AI assistance—no fact-checking via search, no AI-suggested responses, just human-to-human dialogue with all its uncertainty and discovery.

Teach: Modeling Hybrid Intelligence. Share pre-AI cognitive strategies with others, especially youth. Teach them to use physical maps, engage in sustained thinking without external input, and tolerate not immediately knowing answers. Model AI integration in ways that enhance rather than replace human creativity. Show how to use AI as a thinking partner rather than a substitute, maintaining agency while leveraging artificial capability.

Transform: Making a Meaningful Difference. Apply insights within your sphere of influence. If you're a parent, create AI-free spaces for children's cognitive development. If you're an educator, design learning experiences that strengthen human capacities alongside AI literacy. In professional contexts, advocate for AI implementation that preserves human agency rather than simply maximizing efficiency. Choose personal AI usage patterns aligned with your values about human development—perhaps using AI for routine tasks while preserving human effort for activities that develop valued capabilities.

Our Closing Window

We occupy a unique historical position: We have cognitive leftovers from unmediated human intelligence. Our children will grow up in an AI-mediated world. Whether they develop robust human capabilities alongside artificial assistance or become dependent on technological thinking substitutes depends largely on choices we make now.

The Hybrid Tipping Zone concerns the future of human consciousness. The question isn't whether AI will change how we think, but whether we'll be conscious participants in that change. We can still shape AI's role in human development—if we choose growth over convenience, agency over automation, and long-term flourishing over short-term efficiency. The cognitive patterns we establish now will become either the foundation for future human potential or the boundaries constraining it.

Money for Nothing: Finance and the End of Culture

If you want to know about contemporary popular art (television, music, movies, perhaps a few books), you’ve got to know the names. I don’t mean the names of artists, writers, musicians, actors, or directors. I don’t mean the names of the big studios, publishers, and record companies either. I mean names behind the names: the names of equity, the money faucets, the money pools, the money men. It’s bleak fun to list them; money has so many names now. You might be familiar with a few of these: take Bain Capital, which gave us Mitt Romney in 2012 and the takeover of Warner Music in 2004. Or there’s the Carlyle Group, a nest for figures like George H. W. Bush. And you know BlackRock’s in the house. But there are many others, and often they have appellations from a Pynchon novel: Pershing Square, Archegos Capital Management, State Street, Vanguard, KKR, Colony Capital, Apollo Global Management, Silver Lake, AlpInvest, Saban Capital Group, Providence Equity Partners, Pluribus Capital, Red Zone Capital Management, RatPac-Dune Entertainment (that’s Steven Mnuchin, the Trump 1.0 cabinet secretary), the ominously named “Disney Accelerator.” This profusion might make it sound like the media and culture industries are heterogeneous, varied, and diverse, pumping out more innovative content every day—who can ever keep up with new shows?—but they aren’t. Instead, there’s a lot of capital collecting in a very few hands. That’s because the past several decades have seen rampant consolidation via mergers and acquisitions across creative fields, all of it backed by rivers of Wall Street equity. In visual media, for example, there are just five major players (Comcast, Disney, Sony, Paramount, and Warner Bros). The music industry, meanwhile, has the big three labels (Universal, Sony, Warner). Silicon Valley, of course, is up to its neck in media production and distribution via platforms like Hulu and Netflix, and has just five prime stakeholders (Meta, Alphabet/Google, Amazon, Apple, and Microsoft). The story of contemporary popular art, then, is centralization. That is, it’s the same pattern we have seen everywhere during what Robert Brenner calls “the long downturn”: the ongoing post-1973 period of declining industrial productivity, mushrooming debt, and—most important for our story—rising financialization of the economy, the turn toward the trading floors of London and Manhattan, with all the cultural inequality and decay this entails. The same class of people stripping journalism and higher education for parts, then, are also fully in control of the apparatus used to produce narrative and visual art. And, as Andrew deWaard, an assistant professor of media and popular culture at UC San Diego, shows in his shocking new study Derivative Media: How Wall Street Devours Culture, this same class of people are constructing “a more poisonous, extractive media system,” where “production cultures are increasingly constrained by extraction cultures.” The first half of Derivative Media historicizes and theorizes the rise of high-finance art, then the latter chapters turn to specific media properties. One of the most exciting elements of the book is how deWaard swings between a granular critique of texts and “distant,” quantitative, macro-scale research into vast corpuses of textual information collected in databases like Genius (for song lyrics) and IMDb (film and television). 
Using cataloging software, he examines vast swathes of lyric composition and intertextual reference for patterns in artistic production, in everything from Jay-Z tracks to 30 Rock episodes, “on a scale that would not be possible without computation” or “the database form” or rigorous “data visualization.” If you want charts of industry mergers, market trends, and the rate at which various types of cars get name-checked by rappers, deWaard has you covered, as he plots the connections between what a Marxist might call the economic base and the cultural superstructure. If you’d like to know how individual linguistic and aesthetic signifiers (Mercedes, Patrón, Star Wars, Apple, GE, McDonald’s) get deployed in specific texts, deWaard does criticism on that scale too. This is a scholar who personally catalogued all the references in 30 Rock, perhaps the most referential show ever made, then visualized his findings in dense, colorful graphics, all while listening to a lot of Billboard rap.

DeWaard’s work provoked a lot of uneasy encounters between art I like and the realities of culture capitalism. Your favorite work might be stained with blood. It probably smells like cash. The result? On this and much else, deWaard is direct: “Power has been concentrated within financial institutions and is expressed using financial instruments and financial engineering strategies,” and “it is obscured behind byzantine shell corporations, complex mathematics, and an army of mostly men in expensive suits. It’s a convoluted story, but it can be told simply: the money pools in one location.” DeWaard doesn’t put it quite this way, but I would: What emerges from his book is a host of (yet more) reasons everyone should be a Marxist.

Thus, Derivative Media is not about a lost golden age in Hollywood or music or whatever. DeWaard is clear about this: Under capitalism, these have always been for-profit ventures with industrial conditions. And whether you’re talking Nina Simone or The Matrix or all those Renaissance paintings the Medici family had done, art and commerce have always existed in what we might charitably call an uneasy tension. Indeed, for theorist Max Haiven, money and art form one of modernity’s core dialectics. They are not “mutually opposed mythical forces” but “mutually encrypting structures”: “Art,” Haiven claims in Art After Money, Money After Art, “cannot be corrupted by capitalism because it has always already been derivative of capitalism.” Yet even he concedes that “art is never completely or fully incorporated: what allows it to generate its saleable contemporaneity is precisely the small latitude of incomplete freedom, obstreperousness, antagonism, and radicality it is afforded.”

The key word here being “saleable”—even radical work can be ingested by capital and used to comfort or enrich “people who,” deWaard observes, “treat culture as just another input in their cash-flow-extraction strategies.” And, as he tells it, capital has figured out new ways to penetrate artistic production and to metastasize through the circuits of distribution and consumption of artworks: fracturing once-coherent human systems in order to wring out new profits. The culture industries have been thoroughly colonized by complex financial players (mainly hedge funds and private-equity groups) and their warped logics of sociality. Wall Street has the copyrights, the masters, the deeds.
What might seem like a firehose of new art—fresh indie artists appear on Spotify all the time, A24 distributes a lot of movies, the New York Times still has book lists—is, in reality, “a Russian nesting doll of conglomeration and investment.” It gets worse. Beyond the sectoral infestation of artistic fields by capital, the takeover of film production and song catalogs and radio stations, Wall Street has found ways to financialize art at the constitutive level of the text itself. It’s not just the industries that are financialized; works qua works are themselves securitized investment vehicles, too. In a song or show, for example, every nod to alcohol or cars can lead to further monetization opportunities. Any memorable words, ironic references, witty allusions, striking images, and catchy songs can all be broken up, “securitized,” and redistributed, across platforms owned by a few people. Indeed, for the masters of the universe, there is no art, only content, only intellectual property. DeWaard argues that “financialized texts become sites of capital formation” alone, and that “culture has a subservient role in the financial system, which sees it as merely another numerical value to trade. The stock exchange,” he continues, “has been embedded within the media text.” The result? Mostly “flooding the zone with shit,” with “lots of content, but little creativity or criticism.” Thus, both the culture industries and the artworks they disgorge are “derivative,” in the double sense. On the one hand, they are literally financialized; on the other, they are increasingly boring, toothless, and antiradical. There is no such thing as a “naturally occurring media economy,” deWaard contends. “There is only political economy, a system of social relations constituted through law and institutional behaviors, one that is currently arranged hierarchically and could just as easily be arranged differently. The one we have is driven by power, not exchange of goods and services.” In other words, to understand art, you must talk power, which means talking money. It is tempting to narrate this as just another branch in the gruesome story of neoliberalism, the post-’70s marketization and privatization of the world. And to be fair, deregulatory monsters like Reagan and Clinton are part of this story. But by drawing on a tradition of Marxist historical scholarship, deWaard asks us to think more widely of capitalism’s “longue durée” over the past 600 years, where, if we look right, we see the cyclical reemergence of finance capital. Again and again, capitalist systems mature, wither, and morph into new structures and nodes of power. It happened to Renaissance Italy, to the Dutch trading empire (think of those Old Masters), to imperial Britain, and now to the United States, with its loosening grip on hegemony. 
DeWaard concurs with historian Samuel Chambers, quoting the latter’s contention that “there is no such thing as ‘the economy’” in the sense of a naturally occurring free market, only “an overlapping, uneven, discontinuous, and non-bounded domain” that is simultaneously political, financial, social, and cultural.1 This extends to the media economy, which deWaard argues is not a balanced system of supply and demand where artists satisfy the existential needs of consumer-customers who “decide what is popular.” Thanks to deregulation of the finance industry in the 1980s and ’90s, hedge funds and private-equity groups and investment banks were suddenly off the leash and eager to direct “the economy” to where they could accumulate the most.2 Finance, deWaard says, is a machine controlled by the powerful; it changes and directs economic trends and “is not a picture or representation of some external phenomenon we call the marketplace; rather, finance has become the powerful engine that drives the marketplace in certain directions. The destination is power, wealth, and inequality.” Thus, rather than a seemingly organic creation and exchange of goods, what we see everywhere is simply the flexing of elite power: when corporations offer shareholder dividends and stock buybacks; when CEO compensation balloons; when mergers permit “cartel-like behavior” by too-big-to-fail conglomerates; when musicians get paid pennies for streams, while consumers are trapped within a “rentier logic” that privileges access over ownership; when even our best art house films are funded by capricious billionaires; when hedge funds proliferate (11,000 operating today) with “no incentive to produce value, only extract it,” capturing Rolling Stone and Artforum along the way; when private-equity vultures leverage debt to buy companies and then strip the copper from their walls, leaving “bankruptcies, layoffs, and unpaid bills.” All of these instances of elite power deWaard labels “vehicles for upward redistribution,” and he goes on to quote Henry Hill’s Mafia credo from Goodfellas (1990): “Fuck you, pay me.” Artists, musicians, actors, writers, and other media creatives, along with their audiences, get fucked. Financiers get paid. Wall Street is versed in literary theory, even if this “cultural cartel enacting mass theft of creativity” doesn’t know it, or care. To begin: Financialization entails abstracting away from the attributes of an existing thing: as deWaard has it, “derivatives are an instrument to hedge or speculate on risk, basically a wager on the fluctuation of the cost of money, currencies, assets, or the relationships among them … Their value is derived from the performance of an underlying entity” (original italics) that is not itself traded. Further, this “logic of fluid conversion,” whereby every coherent asset can be abstracted and “unbundled” into fungible instruments, is “a natural fit for transnational media conglomerates with holdings in film, television, music, the popular press, video games, online media, theme parks, and other cultural properties.” Artworks are not cohesive objects but rather collections of glittering fragments that can be packaged, traded, resold. In 1980, around the same time the US economy was getting financialized, the literary critic Julia Kristeva theorized “intertextuality,” the concept that no artistic text is legible as a singular bounded item, but is rather always “a mosaic of quotations” from other texts, contexts, and viewers/readers. 
In deWaard’s telling there is now “a concrete bankability to the once-radical concept.” When art becomes IP, each privately owned and “radically open text offers vast intertextual and intermedial opportunities for potential profit,” and his subsequent description of Kristeva’s nightmare is worth quoting at length:

Derivative media operationalizes intertextuality. On one end of the spectrum, figurative devices such as allusion, parody, satire, and homage create constellations of textual reference and influence; on the other, commercial devices such as product placement, brand integration, branded entertainment, and native advertising deliver consumer influence. The latter typically involves a direct transfer of money, while the former often enacts an indirect exchange of cultural capital. The key to this exchange is the interplay between these two forms of “derivation,” the textual and the financial.

In this “interconnected referential economy” every text has been “internally financialized,” and there is no outside we can escape to: “From the content of the securitized cultural text, to the fragmented audience that engages with it, to the precarious labor that produces it, to the overpaid management that organizes it, to the networks that circulate it, to the indebted corporations that catalog it, to the systems of accumulation that facilitate it—financial capital now fuels the pop-music hit machine and the Hollywood dream factory.” Therefore, in this new gilded age, the same apparatus flooding the market with AI slop and Marvel sequels and Frasier reboots is simultaneously impoverishing artists.

Jay-Z put it memorably: “I’m not a businessman, / I’m a business, man.” Much as Jay-Z prides himself on artistry, all that lyrical firepower on display in albums like Reasonable Doubt (1996) and The Blueprint (2001) was for money, not laurels. “Hip-hop,” argues the critic Greg Tate, “is the perverse logic of capitalism pursued by an artform.” DeWaard concurs, and Derivative Media maps out a “lyrical marketplace” that “merges the formal and the financial,” because hip-hop was born in the 1970s, just like this round of financialization. As such, “its form, style, and structure have come to explicitly exhibit properties of its economic context … hip hop is not just subject to business processes, it is itself consciously a business process.” MCs are “musician-speculators” whose every flow and word are embedded in a system where “lyrics are rendered fungible assets and securitized into a speculative instrument.”

Many rappers like to boast about expensive vehicles and top-shelf booze, and deWaard uses computational analysis of Genius’s lyric archive to plot patterns. For example, Cristal fell off after the 1990s, while Patrón had a big moment in the recession aughts; Mercedes, meanwhile, has been a constant since the Clinton era. DeWaard notes economic connections like Courvoisier sales growing after Busta Rhymes’s 2002 single “Pass the Courvoisier Part II.” But he really zeroes in on Jay-Z’s relationship to Armand de Brignac champagne, which the mogul rechristened “Ace of Spades” in the late 2000s, shortly after investing in and just before becoming a majority owner of the brand. Seemingly organic references (“I used to drink Cristal, the muh’fucker’s racist / So I switched gold bottles on to that Spade shit”) are actually market moves. In derivative modernity every text is a package of assets. The poet as petit-bourgeois business owner or, if she dreams big, a mogul.
But you can pack even more intertextuality into an episode of TV, and deWaard turns to 30 Rock: the pinnacle of the self-aware, allusive style of postmodern comedy that you also see in The Simpsons, South Park, and Family Guy. In what deWaard calls “securitized sitcoms”—where every joke about a real or fake brand is a chance to monetize and extend value—“the full extent of the fiscal exchange is concealed, but with added formal mechanisms such as parody, satire, and irony used as camouflage.” On financialized TV, “referential jokes as a form are rendered a potential asset class”: far less crude than midcentury product placement, far slyer and more sophisticated, congratulating the viewer on getting the joke, which is itself “a comedic shroud for the constant onslaught of brands and corporate texts.”

This is particularly true of Tina Fey’s creation, the in-house jester of NBCUniversal, which besides being brilliantly funny is an extended rumination on corporatized media. DeWaard calls it “industrial self-theorizing both ironic and lucrative.” In other words, the rapid-fire, poly-referential writing on 30 Rock is tongue-in-cheek humor that also pays well, that both “satisfies and subverts a corporate mandate” while congratulating the viewer’s intelligence. Make a satirical reference to Bed Bath & Beyond or Star Wars or Siri? That’s still intertextual placement, woven into the cultural object at the level of plot. Mock NBC, the company that employs everyone on 30 Rock? No problem. It all pays dividends for NBCUniversal’s parent company, Comcast.

Even the sharpest jokes on the show are ultimately situated within, not against, the status quo. Like The Simpsons, The Office, or Peep Show, 30 Rock is fundamentally conservative, in the sense of suggesting very little that would threaten the accumulation of capital or the current hierarchies of American society. Liz Lemon is awful, but she’s relatable, and nothing in the dense texture of incredibly good jokes on the show threatens anyone’s ability to make money in the real world. When she finds the perfect pair of jeans at a hip Brooklyn spot that regrettably turns out to be owned by a Halliburton slave-labor scheme, the joke lands. We get it: Halliburton, which helped loot and destroy Iraq in the 2000s (this episode aired in 2010), is evil. But nothing further. Congratulations. We are doing the standard bourgeois move of bearing witness to something terrible without a hint of how to change the conditions that produce it, only through the prism of genius joke writing. It becomes a little sickening after a while, because “the winking is constant and the nudging becomes a sharp elbow.”

No show told jokes faster or packed more references into its diegetic universe, and after a certain point it’s like being with a very witty person, which is to say exhausting. Indeed, the show’s universe is so parodic, so reflexive, so referential with respect to the detritus of capitalism, that narrative-aesthetic interpretation isn’t sufficient—again, we must think like Wall Street. “Textual analysis typically involves asking questions about a text’s form, composition, and style,” deWaard points out. But “increasingly, that means asking: Was this formal component for sale? Might it be for sale in the future? What are the market relations and pricing mechanisms among these components?”
There is no mise-en-scène, only what he calls “mise-en-synergy” (synergy being a favorite term of cynical managers like Liz’s boss, Jack): “the multi-platform relationship between audiovisual style, meaning, and economics.” Don’t worry if you feel queasy—you still get the joke, like the other discerning consumers. So if even a celebrated satire is domesticated and toothless—just a high-grade marketing ploy with jokes—where can one turn for radical art? You might think highbrow cinema would be the place to go, the stuff that isn’t Marvel Extended Universe sludge: the work of the Martin Scorseses and Greta Gerwigs and Bong Joon-hos and Alfonso Cuaróns of the world. Yet finance rules here, too. The overwhelming majority of art house production and distribution companies are what deWaard calls “billionaire boutiques” funded by the largesse of a “plutocratic patron,” often the child of someone who made their money screwing over others. There’s Big Beach (Little Miss Sunshine, Away We Go, Our Idiot Brother, The Farewell, and others), founded with a fortune Marc Turtletaub’s father, Alan, made giving out subprime mortgages prior to 2008. There’s Oracle magnate Larry Ellison’s child Megan, who runs Annapurna Pictures (Zero Dark Thirty, Phantom Thread, Her, If Beale Street Could Talk). And A24, distributing glossy, sensitive films (Ex Machina, Lady Bird, Hereditary, Uncut Gems, The Lobster)? Unfortunately, the company is a direct outgrowth of the Guggenheim mining fortune, built in the pits and quarries of the global South, and it is now run through the asset-management firm Guggenheim Partners, where A24’s cofounder Daniel Katz was once the head of film finance. Most of the time, art house film is just reputation laundering or “corrupt philanthropy” or both. For the ownership class, nothing is fundamentally about art. Good luck getting radical art past these gatekeepers. Challenges to the artistic, cultural, or political status quo are defanged in financialized Hollywood, since that status quo benefits Hollywood’s own elite. Even at the boutique houses, “social justice ideals are compromised, neutralized, and suppressed within the framework of plutocratic patronage,” and even the edgier films offer only “low-level, technocratic fixes” or appeals to individual nobility. Your film had better not suggest systemic change or critique capitalism. Thus we get beautiful, mournful, liberal (at best) art—“a sort of calculating complicity”—and “the overall picture is one of mountains of wealth casting a shadow on arthouse theaters playing esoteric indie films.” And if one of those little movies does well financially, maybe you have just found your next Marvel director: Indies serve as “the research and development wing of Hollywood, as many of these directors are subsumed into blockbuster film and television.” Art continues to get made—that’s what human beings do—but capital devours it. For me, the grimmest part of Derivative Media is the fact that deWaard isn’t just writing about trash. It would be one thing to take aim at reality TV, which everyone knows sucks; but 30 Rock is one of the best sitcoms ever made. There’s plenty of chart garbage in modern music; but at his peak Jay-Z was one of the greatest MCs who ever lived. A24 distributes a lot of incredible films that are the opposite of superhero schlock; it’s just all backed by the toil of workers in the global South. The point of a blockbuster film isn’t cinema per se, profitable as these movies are. 
It’s to branch into other monetizable media, especially video-game voids that “attempt to build unique universes in which a broad range of [a company’s] intellectual property is not just exploited strategically, but offered in a more immersive manner.” I certainly experienced a dark night of the soul reading the list of Disney game properties: You’ve got Kingdom Hearts, the epochal multi-character hit spread over 13 editions, but also Disney Infinity, Disney Princess, Disney Magical World, Disney Dreamlight Valley, Disney Friends, Disney Ultimate, Disney Art Academy, Disney Learning, Epic Mickey, Disney Sing It, Dance Dance Revolution Disney Mix, Disney Twisted-Wonderland, Disney Magic Kingdoms, Disney Emoji Blitz, Disney Heroes: Battle Mode, Disney Fantasy Online, and the Disney Mirrorverse. It’s all right there on the internet; your students and children are probably playing these, each gaming universe a money and data hole, seductive as any drug from science fiction. In Immediacy, or, The Style of Too Late Capitalism (2024), Marxist critic Anna Kornbluh identifies “a master category for making sense of twenty-first-century cultural production” that is narrative or quasi-narrative, everything from fiction to theory to film to television: She calls it “immediacy.” Under a planetary regime premised on speed, instantaneity, urgent circulation, and fluid logistics (a “petrodepression hellscape”), art comes to resemble the economy: intense, immersive, fast, liquid, a “pulsing effulgence [that] purveys itself as spontaneous and free, pure vibe.” Art has a harder time with representation, intersubjectivity, duration, and critical thought (and audiences come to desire less of these). Kornbluh writes, “Fluid, smooth, fast circulation, whether of oil or information, fuels contemporary capitalism’s extremization of its eternal systemic pretext: things are produced for the purpose of being exchanged; money supersedes its role as mediator of exchange to become its end point.” And like deWaard, Kornbluh emphasizes the decay of narrative art forms, their general turn toward a slop of sequels, prequels, reboots, franchises. “What matters in [such] a universe is its endless replication,” she argues, “the distention without innovation of new ideas, new characters, new universes. And by ‘matters,’ one means ‘sells’: the twenty highest-grossing Hollywood films since 2010 were all sequels, eighteen of them issued by Disney” (original italics). And this makes sense, for though deWaard targets good art, he certainly doesn’t ignore trash altogether. After all, the ultimate finance pipe dream is what he calls the “brandscape blockbuster,” the IP or “metaverse” movie, “derivative media at the scale of the world.” All this starts in the late Reagan era with a text I love, Who Framed Roger Rabbit (1988), a film that, thanks to Steven Spielberg’s many phone calls, blends 70 references from works belonging to multiple companies, including Disney, Turner, Universal, and Warner Bros. Mickey Mouse and Bugs Bunny even appear together for the only time in big-screen history. Imagine the possibilities! In fact, plenty of people did. Sharks noticed that Roger Rabbit made over $350,000,000 at the box office. 
Subsequent, fully articulated IP franchises—which bleed from screen to screen, platform to platform, device to device—are the ultimate in the capitalist enclosure of media, which duplicates the real-world destruction of “anything resembling public or communal space that isn’t monetizable.” “With strategic licensing agreements and merchandising deals,” deWaard writes, “these brandscape blockbusters seek to develop a fantasyland made in the image of the financialized marketplace, reflecting our dystopian reality back to us as a playful fantasy.” Texts like Wreck-It Ralph, Avengers, and The LEGO Movie appeal to kids, of course, but there’s also “a drip feed of dopamine for older viewers playing spot-the-reference.” It’s democratic: Everyone’s brain gets melted. Critique dies. Numbed consumption wins. We pay good money for this. Reading Derivative Media’s account of the popular-art industries, now subject to finance instead of Fordism, I kept thinking about T. S. Eliot’s “The Waste Land,” which had its centenary three years ago. Part of this pertains to Eliot’s subject matter: Published in 1922 amid endless wars and just after a brutal pandemic, his modernist assemblage of allusions and texts imagines the West as a hellish necropolis, where civilization lies in fragments: “Unreal city, / Under the brown fog of a winter dawn, / A crowd flowed over London Bridge, so many, / I had not thought death had undone so many.” No need to get into Eliot’s royalist / Anglo-Catholic politics or to unpack all the allusions in just this selection (Baudelaire and Dante, basically), let alone the whole 434-line poem. What matters, here, is that his vision of the world resonates a century later, as we look down the barrel of hot-planet fascism, while flowing crowds stare at their phones. Content aside, the form of “The Waste Land” matters too. For Eliot and other modernists, narrative as well as visual and lyric form could be fractured, rearranged, requoted, rendered multilateral and nonchronological. A million monographs have been written about this, and people go on arguing about whether it even makes sense to separate ironic modernist intertextuality from ironic postmodern collage. Eliot and David Foster Wallace both used endnotes. Anyway, Webster’s defines modernism as. … There’s a rotten irony here, though: The reference farmers at Roc-A-Fella Records and 30 Rockefeller Plaza and “song management” firms like Hipgnosis are extractive parodists of Eliot (who also had a day job as a banker). For his part, though, the poet hoped to save something meaningful from the wreckage of war and illness: to shore some fragments against his ruin. Today, it’s about capital now, real money: the stuff that you can spend or hoard or both. Eliot saw shards and pulled some together, because he felt spiritually compelled to. Financiers see fragments and yearn to mix them into new markets, pricing them ever more dearly. Thus, we live inside Kristeva’s and Eliot’s black mirror, this extractive network of knowing allusions and winking irony, culture articulated not by melancholy artists but Wall Street ghouls. The landscape of social experience remains as atomized and alienating in 2025 as it was in 1922, with some new aesthetics and technologies, but with the same monster at its back (imperial capitalism). If anything, the post-1970s reemergence of high finance puts Eliot’s grim view of the West—we’d now say the global North—on steroids. 
Now, the broken landscape is securitized; all the pieces have tradeable value, at least for a few people. We endure the same existential problems, plus the specter of biospheric collapse, plus capital colonizing more of interior and expressive life. In Civilization and Capitalism, Fernand Braudel calls the financialized stage of capitalism “a sign of autumn.” Maybe Eliot just got the date early. Then again, seven years after he published “The Waste Land,” the world economy evaporated.

Mixing aesthetic and political-economic critique, deWaard’s work is a mind-bending contribution to whatever is left of public-humanities criticism. He emphasizes that “one of the fundamental questions of this project is to ask what autumnal culture looks like,” and concludes that “it looks a lot like hip hop, reflexive comedy, and branded blockbusters: texts that are entrepreneurial, speculative, and, above all, derivative.” More ominously, his conclusions are relevant to fields of cultural production, distribution, and consumption that he doesn’t have space for: Why is short, anti-intellectual Insta poetry most marketable now? Don’t ask critics or poets—ask Wall Street and what Kornbluh calls “algorithmic culture.” Kay Ryan is still alive, but most new readers prefer Rupi Kaur. “In the extremity of too late capitalism,” Kornbluh observes, “distance evaporates, thought ebbs, intensity gulps. Whatever. Like the meme says: get in, loser.”

Landlords are in control, breaking everything to bits and renting it back to us at higher prices. And it is all worth a tremendous amount of money, at least to a few people, who are willing to kill the rest of us—and our cultures—to keep it.

Derivative Media concludes with a bold-faced set of pragmatic, social-democratic ways to break the grip of finance. We could, for example, tax billionaires more, or fight like hell for unionization, or close the “carried interest” loophole that only benefits hedge-fund and private-equity managers, or actually enforce antitrust legislation that is already on the books. (Indeed, under the Biden administration, Lina Khan was doing that at the Federal Trade Commission.) We could, deWaard writes, have “a less capitalist, more democratic organization of society [that] could be modeled in how we collectively allocate culture, in both how we access media and the labor that goes into making it.”

Of course, if we had the ability—as a politically functional society—to enact such reforms, we probably wouldn’t need them in the first place, and with Donald Trump resuming control of the White House, people like Lina Khan and possibilities like progressive tax reform are gone. What deWaard tentatively envisions will not happen. Things are probably going to get worse in art and media, because the python grip of capital is only getting stronger. We have what Kornbluh calls the “recycling of sopping content” to look forward to, with a few brilliant works financed by billionaires in the mix for the awards cycles. Fuck you, pay me. So, what’s on tonight?

GOATReads: History

The World Trade Center, by the Numbers

From the foundation to the elevators, everything about the Twin Towers was supersized.

When the World Trade Center’s Twin Towers opened to the public in 1973, they were the tallest buildings in the world. Even before they became iconic features of the New York City skyline, they reflected America’s soaring ambition, innovation and technological prowess. The towers’ eye-popping statistics amply illustrate that ambition: They rose a quarter-mile into the sky. They contained 15 miles of elevator shafts and nearly 44,000 windows—which took 20 days to wash. From the South Tower observation deck on a clear day, visitors could see 45 miles. The Trade Center complex was so big, it had its own zip code.

But some of the same impressive architectural elements may have also helped worsen the tragedy on the fateful morning of September 11, 2001. Calling the project “the architecture of power,” Ada Louise Huxtable, an architecture critic for The New York Times, offered a prescient warning when the towers were going up in 1966: “The trade-center towers could be the start of a new skyscraper age or the biggest tombstones in the world,” she wrote.

These facts and figures offer some perspective on the engineering and architectural feats that made the Twin Towers possible.

Time to build: 14 years (from formal proposal to finish)

David Rockefeller, grandson of the first billionaire in the U.S., had the idea to build a World Trade Center in the port district of Lower Manhattan in the 1950s. By 1960, city, state and business leaders had come on board. The Port Authority of New York and New Jersey presented a formal proposal to the two states’ governors in 1961, then hired an architect and cleared 14 blocks of the city’s historic grid. They broke ground in 1966. Two or three stories went up weekly. The towers used 200,000 tons of steel and, according to the 9/11 Memorial & Museum, enough concrete to run a sidewalk between New York City and Washington, D.C. The ambitious project overcame community opposition, design and construction setbacks, attempted sabotage by New York real estate rivals and major engineering challenges to open its doors in April 1973 while still under construction. The towers were completed in 1975.

Number of architectural design drafts: 105

After creating more than 100 design ideas with various combinations of buildings, architect Minoru Yamasaki’s team settled on a seven-building complex with a centerpiece of two identical 110-story towers. The towers’ design featured a distinctive steel-cage exterior consisting of 59 narrowly spaced, slender steel columns per side.

Cost to build: more than $1 billion

According to The New York Times, the cost of building the towers ballooned to more than $1 billion, far beyond the original budget of $280 million. Project managers faced cost overruns as safety, wind and fire tests were conducted, and engineers embraced or created innovative construction techniques and new technologies to make the towers lighter and taller.

Rentable floor space: about one acre per floor

The Twin Towers’ innovative design, which placed the structural load on the outside columns rather than on inside pillars, facilitated the owners’ desire for a maximum amount of rentable space. With 10 million square feet of office space—more than Houston, Detroit or downtown Los Angeles had at the time, according to The New York Times—the World Trade Center came to be dubbed “a city within a city.”

Depth of the Twin Towers’ foundation: 70 feet

To build such tall towers on landfill that had piled up around Lower Manhattan for centuries, the towers needed exceedingly strong foundations. So engineers dug a huge rectangular hole seven stories down into the soft soil to reach bedrock. Using a technique developed by Italian builders in the 1940s, the towers’ builders used slurry, a mud-like material lighter than soil, to dig a 70-foot-deep trench and keep the surrounding soil from collapsing as they poured in concrete to form three-foot-thick walls, like a waterproof “bathtub.” But it worked like a bathtub in reverse: it didn’t keep water in, but rather kept water from the Hudson River out—and away from the Trade Center complex. On 9/11, the crashing debris damaged the walls, but they mostly held up. If they hadn’t, engineers fear, the Hudson River would have flooded the city’s subway system and drowned thousands of commuters.

Extra land created by building the WTC: 23 acres

The 1.2 million cubic yards of soil dug up to build the “bathtub” were used to add 23 acres to Lower Manhattan—about a quarter of the area of Battery Park City, the nearby planned community of parks, apartment buildings, stores and restaurants that lines the Hudson River.

Twin Towers’ elevator speed: 1,600 feet per minute

The Twin Towers had 198 elevators operating inside 15 miles of elevator shafts, and when they were installed, their motors were the largest in the world. The towers’ innovative elevator design mimicked the New York City subway, with express and local conveyances. That innovation lessened the amount of space the elevators took up, leaving more rentable floor space. On 9/11, the towers’ elevator shafts became an efficient conduit for airplane fuel—and deadly fire.

Windspeed the towers could sustain: 80 m.p.h.

Engineers concluded in wind tunnel tests in 1964 that the towers could sustain a thrashing of 80-m.p.h. winds, the equivalent of a Category 1 hurricane. With this study, one of the first of its kind for a skyscraper, engineers tested how the towers’ innovative tubular structural design, lighter than traditional masonry construction, would handle strong winds. But they also realized that in the winds coming off the harbor, the towers could sway as much as 10 feet, making office space potentially tough to rent. So the chief engineers developed viscoelastic dampers as part of the towers’ structural design. Some 11,000 of these shock absorbers were installed in each tower, diminishing the sway to about 12 inches side to side on windy days, according to the 9/11 Memorial & Museum.

Number of sprinklers in the towers: 3,700

Two months after the release of the blockbuster movie The Towering Inferno, a three-alarm blaze in the North Tower in 1975 raised concerns that the Twin Towers had no sprinklers. That was common for skyscrapers at the time, and the Port Authority of New York and New Jersey, which owned the buildings, was exempt from the city’s fire safety codes. But facing pressure from state lawmakers and employees in the Center, Port Authority officials spent $45 million to install some 3,700 sprinklers in the two buildings during the 1980s. But the sprinklers failed when they were needed the most: on 9/11, the attacking planes severed the water intake system on impact, so they didn’t work.

Height of the tightrope walk between the towers: 1,350 feet

On the morning of August 7, 1974, French acrobat Philippe Petit walked the more than 130 feet between the Twin Towers on a high wire approximately one-quarter mile up in the air. Thousands of commuters stared up, gasping in amazement. Exuding confidence during his 45-minute show, the tightrope artist lay down on the wire, knelt on one knee, talked to seagulls and teased the police officers waiting to arrest him. Using his 50-pound, 26-foot-long balancing pole, he crossed between the tallest buildings in the world eight times before stopping when it started to rain. Initially critiqued as a “white elephant,” the new towers had difficulty attracting tenants in the early years. Petit’s show, followed by a skydiver jumping off the North Tower and a toymaker climbing up the wall of the South Tower, began to turn that around, making the towers seem more human in scale and more accessible to New Yorkers and tourists.

Force of tremor when the towers fell: akin to magnitude 2.1 and 2.3 earthquakes

On September 11, 2001, seismologists at 13 stations in five states—including the farthest, in Lisbon, New Hampshire, 266 miles away—found that the collapse of the South Tower at 9:59 a.m. generated a tremor comparable to that of a small earthquake registering 2.1 on the Richter scale. Measurements for the North Tower collapse half an hour later: 2.3 on the Richter scale.
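For a sense of what that small difference in magnitude means, the two readings can be compared using the standard seismological energy-magnitude relation, which is not given in the article; the arithmetic below is an illustrative aside using that textbook formula rather than a figure from the original reporting.

\[
E \propto 10^{1.5M}
\quad\Longrightarrow\quad
\frac{E_{2.3}}{E_{2.1}} = 10^{1.5\,(2.3 - 2.1)} = 10^{0.3} \approx 2
\]

In other words, the tremor recorded for the North Tower collapse carried roughly twice the seismic energy of the South Tower collapse, even though the two magnitudes look nearly identical at first glance.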

The Big Bang’s big gaps

The current theory for the origin of the Universe is remarkably successful yet full of explanatory holes. Expect surprises

Did the Universe have a beginning? Will it eventually come to an end? How did the Universe evolve into what we can see today: a ‘cosmic web’ of stars, galaxies, planets and, at least on one pale blue planet, what sometimes passes for intelligent life? Not so very long ago, these kinds of existential questions were judged to be without scientific answers. Yet scientists have found some answers, through more than a century of astronomical observations and theoretical developments that have been woven together to give us the Big Bang theory of cosmology. This extraordinary theory is supported by a wide range of astronomical evidence, is broadly accepted by the scientific community, and has (at least by name) become embedded in popular culture.

We shouldn’t get too comfortable. Although it tells an altogether remarkable story, the current Big Bang theory leaves us with many unsatisfyingly unanswered questions, and recent astronomical observations threaten to undermine it completely. The Big Bang theory may very soon be in crisis. To understand why, it helps to appreciate that there is much more to the theory than the Big Bang itself.

That the Universe must have had a historical beginning was an inevitable consequence of concluding that the space in it is expanding. In 1929, observations of distant galaxies by the American astronomer Edwin Hubble and his assistant Milton Humason had produced a remarkable result. The overwhelming majority of the galaxies they had studied are moving away from us, at speeds directly proportional to their distances. To get some sense of these speeds, imagine planet Earth making its annual pilgrimage around the Sun at a sedate orbital speed of about 30 kilometres per second. Hubble and Humason found galaxies moving away at tens of thousands of kilometres per second, representing significant fractions of the speed of light. Hubble’s speed-distance relation had been anticipated by the Belgian theorist Georges Lemaître a few years before and is today known as the Hubble-Lemaître law. The constant of proportionality between speed and distance is the Hubble constant, a measure of the rate at which the Universe is expanding.

In truth, the galaxies are not actually moving away at such high speeds, and Earth occupies no special place at the centre of the Universe. The galaxies are being carried away by the expansion of the space that lies between us, much as two points drawn on a deflated balloon will move apart as the balloon is inflated. In a universe in which space is expanding, everything is being carried away from everything else.

To get a handle on the distances of these galaxies, astronomers made use of so-called Cepheid variable stars as ‘standard candles’, cosmic lighthouses flashing on and off in the darkness that can tell us how far away they are. But in the late 1920s, these touchstone stars were poorly understood and the distances derived from them were greatly underestimated, leading scientists to overestimate the Hubble constant and the rate of expansion. It took astronomers 70 years to sort this out. But such problems were irrelevant to the principal conclusion.
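To make that proportionality concrete, here is a rough worked example of the Hubble-Lemaître law. The article does not quote a value for the Hubble constant, so the figure used below is an assumed, commonly cited modern estimate of roughly 70 kilometres per second per megaparsec.

\[
v = H_0\, d, \qquad H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}
\quad\Longrightarrow\quad
d = 100\ \mathrm{Mpc} \;\rightarrow\; v \approx 7{,}000\ \mathrm{km\,s^{-1}}
\]

On that reckoning, a galaxy would need to lie several hundred megaparsecs away, on the order of a billion light-years, to recede at the tens of thousands of kilometres per second that Hubble and Humason measured.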
If space in the Universe is expanding, then extrapolation backwards in time using known physical laws and principles suggests there must have been a moment when the Universe was compressed to a point of extraordinarily high density and temperature, representing the fiery origin of everything: space, time, matter, and radiation. As far as we can tell, this occurred nearly 14 billion years ago. In a BBC radio programme broadcast in 1949, the maverick British astronomer Fred Hoyle called this the ‘Big Bang’ theory. The name stuck.

Of course, it’s not enough that the theory simply tells us when things got started. We demand more. We also expect the Big Bang theory to tell the story of our universe, to describe how the Universe evolved from its beginning, and how it came to grow into the cosmic web of stars and galaxies we see today. The theorists reduced this to a simple existential question: Why do stars and galaxies exist? To give a proper account, the Big Bang theory has itself evolved from its not-so-humble beginnings, picking up much-needed additional ingredients along the way, in a story almost as fascinating as the story of the Universe itself.

The Big Bang theory is a theory of physical cosmology, constructed on foundations derived from solutions of Albert Einstein’s equations of general relativity – in essence, Einstein’s theory of gravity – applied to the whole universe. Einstein himself had set this ball rolling in 1917. At that time, he chose to fudge his own equations to obtain a solution describing an intellectually satisfying static, eternal universe. Ten years later, Lemaître rediscovered an alternative solution describing an expanding universe. Although Einstein rejected this as ‘quite abominable’, when confronted by the evidence presented by Hubble and Humason, he eventually recanted.

Working together with the Dutch theorist Willem de Sitter, in 1932 Einstein presented a new formulation of his theory. In the Einstein-de Sitter universe, space is expanding, and the Universe is assumed to contain just enough matter to apply a gentle gravitational brake, ensuring that the expansion slows and eventually ceases after an infinite amount of time, or so far into the future as to be of no concern to us now. This ‘critical’ density of matter also ensures that space is ‘flat’ or Euclidean, which means that our familiar schoolroom geometry prevails: parallel lines never cross and the angles of a triangle add up to 180 degrees. Think of this another way. The critical density delivers a ‘Goldilocks’ universe, one that will eventually be just right for human habitation.

By definition, the Einstein-de Sitter expanding universe is an early version of the Big Bang theory. It formed the basis for cosmological research for many decades. But problems begin as soon as we try to use the Einstein-de Sitter version to tell the story of our own universe. It just doesn’t work.

Applying Einstein’s equations requires making a few assumptions. One of these, called the cosmological principle, assumes that on a large scale the Universe is homogeneous (the same everywhere) and isotropic (uniform in all directions). But if this were true of our universe in its very earliest moments following the Big Bang, matter would have been spread uniformly in all directions.
This is a problem because if gravity pulled equally on all matter in all directions, then nothing would move and so no stars or galaxies could form. What the early universe needed was a little anisotropy, a sprinkling of regions of excess matter that would serve as cosmic ‘seeds’ for the formation of stars and galaxies. Such anisotropy could not be found in the Einstein-de Sitter universe. So where had it come from?

Matters quickly got worse. Theorists realised that getting to the Universe we see from the Big Bang of the Einstein-de Sitter version demanded an extraordinary fine-tuning. If the immediate, post-Big Bang universe had expanded just a fraction faster or slower, then stars and galaxies would have never had a chance to form. This fine-tuning was traced to the critical or ‘Goldilocks’ density of matter. Deviations from the critical density of just one part in 100 trillion – higher or lower – would have delivered universes very different from our own, in which there would be no intelligent life to bear witness.

It got worse: theoretical studies of the formation of spiral galaxies and observational studies of the rotational motions of their stars led to another distinctly uncomfortable conclusion. Neither could be explained by taking account of all the matter that we can see. Calculations based only on the visible matter of stars suggested that, even if conditions allowing their formation could be met, spiral galaxies should still be physically impossible, and the patterns of rotation of the stars within them should look very different.

To add insult to injury, when astronomers added up all the matter that could be identified in all the visible stars and galaxies, they found only about 5 per cent of the matter required for the critical density. Where was the rest of the Universe? There was clearly more to our universe than could be found in the Einstein-de Sitter version of the Big Bang theory.

The solutions to some of these problems could be found only by looking back to the very beginning of the story of the Universe and, as this is not a moment that is accessible to astronomers, it fell once more to the theorists to figure out what might have happened. In the early 1980s, a group of theorists concluded that, in its very earliest instants, the post-Big Bang universe would have been small enough to be subject to random quantum fluctuations – temporary changes in the amount of energy present at specific locations in space, governed by Werner Heisenberg’s uncertainty principle. These fluctuations created tiny concentrations of excess matter in some places, leaving voids in others. These anisotropies would then have been imprinted on the larger universe by an insane burst of exponential expansion called cosmic inflation. In this way, the tiny concentrations of matter would grow to act as seeds from which stars and galaxies would later spring.

To a certain extent, cosmic inflation also fixed some aspects of the fine-tuning problem. It was like a blunt instrument: no matter what conditions might have prevailed at the very beginning, cosmic inflation would have hammered the Universe into the desired shape. The theorists also reasoned that the hot, young universe would have behaved like a ball of electrically charged plasma, more fluid than gas. It would have contained matter stripped right back to its elementary constituents, assembling atomic nuclei and electrons only when temperatures had cooled sufficiently as a result of further expansion.
They understood that there would have been a singular moment, just a few hundred thousand years after the Big Bang, when the temperature had dropped low enough to allow positively charged atomic nuclei (protons and helium nuclei) and negatively charged electrons to combine to form neutral hydrogen and helium atoms. This moment is called recombination. The light that would have danced back and forth between the charged particles in the ball of plasma was released, in all directions through space, and the Universe became transparent: literally, a ‘let there be light’ moment. Some of this light would have been visible, though there was obviously nobody around to see it. This is the oldest light in the Universe, known as the cosmic background radiation. This radiation would have cooled further as the Universe continued to expand: estimates in 1949 suggested it would possess today a temperature of about 5 degrees above absolute zero (or -268°C), corresponding to microwave and infrared radiation. This estimate was largely forgotten, only to be rediscovered in 1964. A year later, as physicists scrambled to build an apparatus to search for it, the American radio astronomers Arno Penzias and Robert Wilson found it by accident. This discovery changed everything. The cosmic background, witness to events that had occurred when the Universe was in its infancy, was getting ready to testify. The tiny concentrations of matter produced by quantum fluctuations and imprinted on the larger universe by cosmic inflation would have made the cosmic background very slightly hotter in some places compared with others. This left a pattern of temperature variations in the cosmic background across the sky, much like a bloody thumbprint at a cosmic crime scene. These small temperature variations were detected by an instrument aboard NASA’s Cosmic Background Explorer satellite, and were reported in 1992. George Smoot, who had led the project to detect them, struggled to find superlatives to convey the importance of the discovery. ‘If you’re religious,’ he said, ‘it’s like seeing God.’ The evidence was in. We owe our very existence to anisotropies in the distribution of matter created by quantum fluctuations in the early, post-Big Bang universe, impressed on the larger universe by cosmic inflation. But cosmic inflation could not fix the problems posed by the physics of galaxy formation and the rotations of stars, and it could not solve the problem posed by the missing density. Curiously, part of the solution had already been suggested by the irascible Swiss astronomer Fritz Zwicky in 1933. His efforts had been forgotten, only to be rediscovered in the 1970s. Galaxies are much larger than they appear, suggesting that there must exist a form of invisible matter that interacts only through its gravity. Zwicky had called it dunkle Materie: dark matter. Each spiral galaxy, including our own Milky Way, is shrouded in a halo of dark matter that was essential for its formation, and explains why stars in these galaxies rotate the way they do. This was an important step in the right direction, but it was not enough. Even with dark matter estimated to be five times more abundant in the Universe than ordinary visible matter, about 70 per cent of the Universe was still missing. Astronomers now had pieces of evidence from the very earliest moment in the history of the Universe, and from objects much later in this history. 
The cosmic background radiation is about 13.8 billion years old. But the light reaching us from nearby galaxies, whose distances can be measured using Cepheid variable stars, is much younger. We can get some sense of this by acknowledging that light does not travel from one place to another instantaneously. It takes time. It takes light eight minutes to reach us from the surface of the Sun, so we see the Sun as it appeared eight minutes ago, called a ‘look-back’ time. But the Cepheids are individual stars, so their use as standard candles is limited to nearby galaxies with short look-back times of hundreds of millions of years. To reconstruct the story of the Universe, astronomers somehow had to find a way to bridge the huge gulf between these points in its history. It is possible to study more distant galaxies but only by observing the totality of the light from all the stars contained within them. Astronomers realised that when an individual star explodes in a spectacular supernova it can light up an entire galaxy for a brief period, showing us where the galaxy is and how fast it is being carried away by expansion. Look-back times could be extended from hundreds to thousands of millions of years. A certain class of supernova offered itself as a standard candle, and the distances to their host galaxies could be calibrated by studying supernovae in nearby galaxies that possessed one or more Cepheid variable stars. The expectation was that, following the Big Bang, the rate of expansion of the Universe would have slowed over time, reaching the rate as we measure it today using the Hubble-Lemaître law. According to the Einstein-de Sitter version, it would continue to decelerate into the future, eventually coming to a halt. But when astronomers started using supernovae as standard candles in the late 1990s, what they discovered was truly astonishing. The rate of expansion is actually accelerating. Further data suggested that the post-Big Bang universe had indeed decelerated, but about 5 billion years ago this had flipped over to acceleration. In a hugely ironic twist, the fudge that Einstein had introduced in his equations in 1917 only to abandon in 1932 now had to be put back. Einstein had added an extra ‘cosmological term’ to his equations, governed by a ‘cosmological constant’, which imbues empty space with a mysterious energy. The only way to explain an accelerating expansion was to restore Einstein’s cosmological term to the Big Bang theory. The mysterious energy of empty space was called dark energy. In 1905, Einstein had demonstrated the equivalence of mass (m) and energy (E) through his equation E = mc², where c is the speed of light. It might come as no surprise to learn that when the critical density of matter is expressed instead as a critical density of mass-energy, dark energy accounts for the missing 70 per cent of the Universe. It may also seal its ultimate fate. As the Universe continues to expand, more and more of it will disappear from view. And, as the Universe grows colder, the matter that remains in reach may be led inexorably to a ‘heat death’. How do we know? More answers could be found in the cosmic background radiation. The theorists had further reasoned that competition between gravity and the enormous pressure of radiation in the post-Big Bang ball of plasma would have triggered acoustic oscillations – sound waves – wherever there was an excess of matter. 
These would have been sound waves propagating at speeds of more than half the speed of light, so even if there had been someone around who could listen, these were sounds that could not have been heard. Nevertheless, I still like to think of this as a period when the Universe was singing. The acoustic oscillations left tell-tale imprints in the temperature of the cosmic background, and in the large-scale distribution of galaxies across the Universe. These imprints cannot be modelled without first assuming a specific cosmology, in this case the Big Bang theory including dark matter and dark energy. Modelling results reported in 2013 tell us what kind of universe we live in – its total density of matter and energy, the shape of space, the nature and density of dark matter, the value of Einstein’s cosmological constant (and hence the density of dark energy), the density of visible matter, and the rate of expansion today (the Hubble constant). This is how we know. But the story is not over yet. Astronomers continued to sharpen their understanding of the history of the Universe through further studies of Cepheids and supernovae using the Hubble Space Telescope. Because these are studies based on the use of standard candles to measure speeds and distances, they provide measurements of the Hubble constant and the rate of expansion later in the Universe’s history that do not require the presumption of a specific cosmology. The Hubble constant and rate of expansion deduced from analysis of the acoustic oscillations is necessarily a model-dependent prediction, as it is derived from events much earlier in the Universe’s history. For a time, prediction and measurement were in good agreement, and the Big Bang theory looked robust. Then from about 2010 things started to go wrong again. As the precision of the observations improved, the predictions and the measurements went separate ways. The difference is small but appears to be significant. It is called the Hubble tension. The Universe appears to be expanding a little faster than we would predict by modelling the acoustic oscillations it experienced in infancy. Imagine constructing a bridge spanning the age of the Universe, begun simultaneously on both ‘early’ and ‘late’ sides of the divide. Foundations, piers and bridge supports have been completed, but the engineers have now discovered to their dismay that the two sides do not quite meet in the middle. Matters have been complicated by the development of different kinds of standard candle that are a little more straightforward to analyse than the Cepheids, and rival teams of astronomers are currently debating the details. We should know in another couple of years if the tension is real. And if it is real, then one way to fix it is to tweak the Big Bang theory yet again by supposing that dark energy has weakened over time, equivalent to supposing that Einstein’s cosmological constant is not, in fact, constant. Some tentative evidence for this was published in March this year. And there is yet more trouble ahead. The James Webb Space Telescope, launched on Christmas Day in 2021, can see galaxies with look-back times of more than 13 billion years, reaching back to a time just a few hundred million years after the Big Bang. Our understanding of the physics based on the current theory suggests that, at these look-back times, we might expect to see the first stars and galaxies in the process of forming. But the telescope is instead seeing already fully formed galaxies and clusters of galaxies. 
It is too soon to tell if this is a crisis, but there are grounds for considerable uneasiness. Some cosmologists have had enough. The Big Bang theory relies heavily on several concepts for which, despite much effort over the past 20 to 30 years, we have secured no additional empirical evidence beyond the basic need for them. The theory is remarkably successful yet full of explanatory holes. Cosmic inflation, dark matter and dark energy are all needed, but all come with serious caveats and doubts. Imagine trying to explain the (human) history of the 20th century in terms of the societal forces of fascism and communism, without being able to explain what these terms mean: without really knowing what they are, fundamentally. In an open letter published in New Scientist magazine in 2004, a group of renegade cosmologists declared: In no other field of physics would this continual recourse to new hypothetical objects be accepted as a way of bridging the gap between theory and observation. It would, at the least, raise serious questions about the validity of the underlying theory. This is simply the scientific enterprise at work. Answers to some of our deepest questions about the Universe and our place in it can sometimes appear frustratingly incomplete. There is no denying that, for all its faults, the present Big Bang theory continues to dominate the science of cosmology, for good reasons. But the lessons from history warn against becoming too comfortable. There is undoubtedly more to discover about the story of our Universe. There will be more surprises. The challenges are, as always, to retain a sense of humility in the face of an inscrutable universe, and to keep an open mind. As Einstein once put it: ‘The truth of a theory can never be proven, for one never knows if future experience will contradict its conclusions.’ Source of the article
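An illustrative aside, not taken from the article above: the ‘critical’ density and the mass-energy budget it describes can be put into rough numbers. These are approximate, standard textbook values, quoted only to make the percentages concrete; the critical density follows from today’s expansion rate, and the budget divides into dark energy, dark matter and ordinary matter in roughly the proportions the article gives:

\[ \rho_c \;=\; \frac{3H_0^2}{8\pi G} \;\approx\; 9\times10^{-27}\ \mathrm{kg\,m^{-3}} \qquad \text{(taking } H_0 \approx 70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}\text{)} \]

\[ \Omega_{\text{dark energy}} \approx 0.7, \qquad \Omega_{\text{dark matter}} \approx 0.25, \qquad \Omega_{\text{ordinary matter}} \approx 0.05 \]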

GOATReads: Literature

Mute Compulsion

Alex needs a place to stay, just for a few days. After that, she plans to appear at a party held at the seaside mansion of her former love, Simon. What she wants is to move back into the mansion; as such, she hopes that Simon will reunite with her. Alex’s interest in Simon is obsessive, even desperate. And yet, there is nothing in Emma Cline’s 2023 novel The Guest to suggest that Alex’s fixation is any deeper than needing a place to stay, for as long as she can manage. Nor is it ever clarified if she considers herself a sex worker. There is a lot about her that we don’t know. Reviews of the novel on sites like Goodreads and Reddit will mention Alex’s flat characterization—her seeming lack of depth or backstory, her absence of introspection, her surface-level thinking. This opacity, however, is a deliberate strategy of Cline’s. After all, it doesn’t quite matter how Alex thinks about herself. Instead, the novel focuses on how sheer material compulsion means that she is forced to subsume her desires to Simon’s: to try to please him, to look how he wants her to look, to act how he wants her to act. She needs to do everything right, at risk of having nothing. In crafting this opacity, Cline is resisting the “trauma plot”: a form of expression of character—as Parul Sehgal has described it—that has become increasingly common in contemporary fiction. The commercial success of stories of personal suffering, Sehgal argues, has “elevated trauma from a sign of moral defect to a source of moral authority, even a kind of expertise.” By contrast, Cline’s Alex is a figure utterly devoid of authority. Her only expertise is in the hard work of figuring out how to survive, despite having no money and no place to live. Now let’s go backwards in the timeline of the novel’s author, Emma Cline. It’s 2017. And Cline’s ex-boyfriend is suing her. In the lawsuit, Chaz Reetz-Laiolo alleges that Cline plagiarized from his unpublished writing and used the material in her first novel, The Girls. Narrated by a woman who had, in her youth, been caught up with the Manson Family, The Girls was a splashy book when it was released in 2016. In a three-book deal, Random House paid Cline a $2 million advance for it. In his suit against Cline, Reetz-Laiolo was represented by the law firm Boies Schiller Flexner. This was the same law firm that represented Harvey Weinstein, the notorious Hollywood producer, as he was fighting sexual harassment and assault allegations. Weinstein also hired private investigators to construct “dossiers” about the women whom he thought would expose him. These dossiers were meant to shame them: with details of their alleged sexual histories, for example, or pictures and messages showing how they continued to be friendly with Weinstein after he abused them. His lawyer, David Boies, knew about these dossiers and went along with the plan. In 2017, Boies sent Cline a draft of Reetz-Laiolo’s complaint, saying that he planned to file it in court if she didn’t agree to a settlement. This draft included the same kind of dossier that Weinstein was even then employing. Titled “Cline’s History of Manipulating Older Men,” it featured details of her ostensible sexual past, including her private text messages and photos. This was to be used as evidence to corroborate Reetz-Laiolo’s claim that Cline was not, as the document read, “the innocent and inexperienced naïf she portrayed herself to be.” She was instead often prone to manipulating men to her benefit, extracting gifts and money. 
They were, basically, threatening to use this information to discredit Cline for the jury. The “Cline’s History” section was later removed from the filing. The New York Times and The New Yorker had just published articles about the allegations against Harvey Weinstein, and another piece about Weinstein’s hiring of private investigators was about to appear. Moreover, Cline’s lawyers included Carrie Goldberg, who has represented many victims of harassment, sexual shaming online, and revenge porn. Cline claimed that her ex had been abusive throughout their relationship; the New York Times reported that he was violently jealous, and that when Cline sold The Girls to Random House, Reetz-Laiolo threatened her again. Given Cline’s new high status, he warned, people “might be interested in naked photos of her,” or maybe they would want to read a “tell-all article about their relationship.” In her wanderings through high-priced real estate, Alex is often aligned in the novel with the other household employees she encounters, although they are busy with their respective areas of work while she is mainly loafing about and observing them and other house guests. She notes the way people will try to dig into the staff’s backstories, to demonstrate “how comfortable they were fraternizing” with them. This fraternizing becomes another service that is expected of people who are already working, like when Stevens in Kazuo Ishiguro’s The Remains of the Day (1989) worries over how to “banter” with his new American boss because he wants to please him. “She had experienced her own version of it,” we read of Alex, reflecting on the guests who demanded to fraternize with the staff: “the men who asked her endless questions about herself, faces composed in self-conscious empathy. Waiting with badly suppressed titillation for her to offer up some buried trauma.” The revelation of one’s inner life, in The Guest, is simply one more bit of compliance that people might extract from her. Tell us your story: Make it traumatic, so we can feel good about your employment here, your service to us, the little bit of money you are making doing what we ask of you in this gorgeous home. Her story would be a form of value added, amplifying their enjoyment, elevating their transactions with her by enabling them to believe that if Alex is “bad” in some way, she is nevertheless wounded, and so deserving of their interest—taking her out for dinner, giving her a place to stay for the night. Having sex with her would be more than just self-serving then, as an act of rightful charity on their part. And yet, this revelation is additional work that Alex blankly refuses. She provides them nothing with which to cover over the basic fact of their power over her: power that they pay for with money. The economic relations are left simply to stand. In 2018, a judge dismissed the plagiarism case against Cline, the one brought by her ex-boyfriend and his lawyers who were simultaneously defending Harvey Weinstein. Two years later, in 2020, Cline published a short story in the New Yorker, called “White Noise.” The point-of-view is Weinstein’s. We find Weinstein at home preparing to appear in court the following day, and his wandering thoughts delve into his own cunning deployment of the trauma plot. His lawyers counsel him to dress raggedly and use a walker for court performance, so as to extract sympathy. 
This advice brings to his mind other things Weinstein has said when trying to elicit a woman’s submissiveness: “my mother died today, he said, watching the girl’s face change.” Meanwhile, next door, a new neighbor has moved in: American writer Don DeLillo. It occurs to Weinstein that he should produce an adaptation of the novel White Noise. This will restore him to his rightful social status, he believes. He doesn’t know DeLillo’s work at all well: he mistakes the first line of Thomas Pynchon’s Gravity’s Rainbow, “A screaming comes across the sky,” for the opening of White Noise. The line appeals to him because it is about what he thinks of as a “rending of the known world,” and this is how he understands the case against him, and the moment it expresses: a screaming across the sky, a rending. Weinstein hopes his moment of crisis may be repaired through the lionization of another male creative—through the patrilineage connecting one great man to the next—fortified by his production of the White Noise film. He even imagines that, despite the charges against him, his granddaughter will love him, because she will be in his debt when she gets to intern on the film set, and inevitably DeLillo becomes a friend and writes her a college recommendation letter. Thinking about all this future promise, Weinstein texts a friend, “we as a nation are hungry 4 meaning.” Surely his own trial is the best evidence of that. Let’s return then to The Guest. We have left Alex refusing to tell any sad stories and finding herself dependent on the whims of wealthy, powerful men for life’s basic necessities. Alex is “a reluctant reader of her own self,” according to Jane Hu. But, by contrast, I think that nothing like a reading of her “self” is relevant to the struggle Alex faces in making it through each single day. Having a legible self is just another responsibility to others that she can’t afford. Hu argues also that “The Guest largely remains at the level of mere forms, rarely venturing to probe what might be troubling the waters beneath such glistening stillness.” Yet the novel is, rather, full of images of things emerging from beneath the surface. And this is more and more true as it progresses. As Alex runs out of resources, she gets more desperate and less put together. Pools get muddier, people’s faces get more wary in her presence, worries start to trickle up, and she is always waiting for a man she stole money from to find her and hurt her. This gradual oozing up metaphorizes the whole reality beneath the novel’s apparent surface: first, the fact of the tremendous wealth of everyone around Alex; second, the way she is controlled by and kept out of that wealth’s orbit, even as she passes through posh homes and luxury vehicles. Cline’s writing of the novel was inspired, in part, by John Cheever’s short story, “The Swimmer.” Here, a man sets out on a swimming tour of the neighborhood, going from party to party, toward a home where it turns out that he is not wanted. He is passing through a landscape of wealth from which he is ultimately excluded: “Oh, how bonny and lush were the banks of the Lucinda River!,” he thinks. “Prosperous men and women gathered by the sapphire colored waters while caterer’s men in white coats passed them cold gin.” This is precisely Simon’s world. When Alex shows up at his party in the novel’s climactic scene, she becomes a figure of the oozing return of the repressed herself. 
Far from manifesting a “glistening stillness,” she is its very interruption, destroying the illusion by her sheer presence: messy and tired, seeking a reconciliation with Simon that we know is not coming. We are waiting for some description of the horrified look on his face. Waiting for his reaction to seeing her on his property, uninvited and unwelcome, having broken out of her expected role as a woman subservient to his whims and oriented only by his needs. The quintessential character of today’s trauma novel, according to Sehgal, is “withholding, giving off a fragrance of unspecified damage,” at least at first. She is “Stalled, confusing to others, prone to sudden silences and jumpy responsiveness.” We sense constantly that something “gnaws at her, keeps her solitary and opaque, until there’s a sudden rip in her composure and her history comes spilling out.” This withholding, stalled figure in The Guest is Alex. And, indeed, the novel plays with the tension of us waiting for a moment of dramatic revelation. Here she is in the novel’s final scene, smiling in Simon’s direction, wishfully thinking that “Everything had turned out fine.” But he doesn’t come over to her. Alex thinks, instead, “this was all wrong”—“his eyes seem to look at something beyond her.” The novel concludes with them in this frozen diorama. No sudden rip. No spilling out. No revelations and reconciliations. He is looking right past her. He couldn’t care less about her trauma plot. The Guest is an instance, then, of what Christina Fogarasi has described as the “anti-trauma trauma novel,” in which trauma as a form of narrative “prosthesis” is refused, precisely because it “abstains from mentioning the systemic forces undergirding” anyone’s suffering. Who is Alex? How did she come to be here? The novel’s refusal to answer these questions is a way, too, of refusing the authority of the Weinstein-style “dossier,” which can exculpate or shame, excuse or condemn. Instead, all that Cline leaves on display is the sheer fact of economic domination: the “mute compulsion of economic relations,” as Marx famously put it, which “seals the domination of the capitalist over the worker.” Cline pinpoints the stark truth of this domination within a contemporary landscape of unspecified, informal sex work, on the fringes of a society of spectacularly wealthy asset holders. In other words, Cline pinpoints a landscape not unlike the creative industries: where women often find themselves doing what they can to attract and sustain the attention of people like Weinstein, who have the power to make careers for them or let them sink into oblivion. The trauma plot and the slut-shaming dossier are actually parallel formations, reveals The Guest. They are both formations that deliberately look away from material reality—the determining force of the law of capital in shaping what a woman is willing to do for a man—and, instead, locate particular compulsions and proclivities in a woman’s traumatic back story, compromised morality, and history of intimate entanglements. What Weinstein’s case made so clear—as did Jian Ghomeshi’s in Canada—is the weaponization of the personal story (including the plunge into traumatic interiority) in the busy activity of figuring out how a woman really felt about a man after he did what he did, not just what affect she performed but how she really felt, in her heart of hearts. 
This is all deployed to disguise and excuse the actual domination that compels people to do horrible things, like maintain relationships with evil men, and that compels people even to feel shitty feelings, like gratitude toward these demons, or sympathy, or—dare I say—love.  Source of the article

Society needs hope

Youths around the world are in a profound crisis of despair. Adults must help them to believe that the future will be better. Young people around the world are experiencing an unprecedented crisis of unhappiness and poor mental health. Many observers blame the expansion of social media that began in 2012-13, as well as the long-term negative effects of the COVID-19 pandemic on the social lives of the young, and no doubt those things have exacerbated the decline in mental health. But the causes of the current crisis run deeper. They have to do with the increasingly uncertain futures that the young face due to the changing nature of jobs and the more complex skill sets required to succeed in them; extreme political polarisation and misinformation; an erosion of global norms of peace and cooperation; the uncertainties posed by climate change; and the decline in traditional civil society organisations – such as labour unions and church groups. Meanwhile, families play a bigger role in providing financial and social support in poor and middle-income countries than in rich ones, serving as a buffer in the face of this perfect storm of trends. There are many ways in which this crisis of unhappiness expresses itself. One is the recent disappearance of a long-established U-shaped curve in the relationship between age and happiness. Until recently, the nadir or low point was in the mid-life years, and both the young and the old had higher levels of happiness and other dimensions of wellbeing. This relationship held in most countries around the world, except for those that are extremely poor, have high levels of political violence, or both. Yet, since 2020, the relationship has become a linear upward trend in many countries in North America and Europe – and several in Latin America and Africa as well as Australia. This means that the least-happy group in these countries is now the young (those aged 18-34) and the happiest are those over the age of 55. A more extreme manifestation has been the increase in suicides, rise in reported anxiety and depression, and ‘epidemic’ levels of loneliness among the young, particularly in the United States and the United Kingdom. The US already has a crisis of ‘deaths of despair’; first identified as a problem of middle age by the economists Anne Case and Angus Deaton in 2015, such premature deaths due to suicide, drug overdoses and alcohol and other poisonings are now being seen in greater numbers in the young, especially those Americans between the ages of 18-25. Youth unhappiness trends are particularly extreme in the US, in part due to its much more limited social support system for those who fall behind, the exorbitant costs of higher education and healthcare, and very high levels of gun violence – including in schools. As a result, there is a large and growing mortality gap between Americans with and without college degrees. Those with degrees live eight more years, on average, than those without. These are potentially overwhelming challenges for young people to navigate. This crisis matters because of the human costs, such as reduced longevity and significant gaps in quality of life, because those with mental illness are much less likely to complete higher education, and more likely to be in poor health and experience homelessness and other kinds of deprivation. They are also less likely to be in stable jobs and/or long-term relationships. 
Yet it also has deeper and more far-reaching implications as it reflects a lack of hope for the future among an entire generation in many countries, suggestive of a broad systemic failure that we do not fully understand. My research – and that of some others – shows that hope is a key factor in health and longevity, productivity, educational attainment and stable social relationships, among other things. The reason hope is so important, even more than current levels of wellbeing, is that integral to hope (as opposed to optimism) is having agency and potential pathways to a better future. Psychiatrists, for example, while they don’t provide examples of how to restore hope, note that it is the critical first step to recovering from mental illness. Individuals with hope are more likely to believe in their futures and to invest in them, as well as to avoid risky behaviours that jeopardise them. In contrast, people in despair have reached a point where they literally do not care whether they live or die. Several studies find direct or indirect linkages between despair and other mental health disorders and misinformation and radicalisation, although there are also some studies that dispute the claim. While these manifestations are particularly extreme in the US, many other countries – especially in Europe – are also experiencing them. While I have worked on wellbeing for many years now, I began my career as a development economist. I was born in Peru, and from an early age was exposed to the long reach of poverty because of the research my father (a paediatrician at Johns Hopkins in Baltimore) did on infant malnutrition. He found that, with the right diagnoses and treatments, like addressing inadequate levels of key minerals such as copper and zinc, severely malnourished infants could recover and have healthy lives without cognitive and other kinds of impairment. At the time, neither knowledge of nor treatment for these deficits was widely available. My early exposure to those issues made me committed to better understanding the effects that poverty and inequality have on people’s lives and health. Development economics seemed to be the best tool kit for doing so. But I also became increasingly curious about the psychology involved, such as why and how people living in extreme deprivation could be so optimistic, generous and resourceful, as they are in Peru. I did my PhD dissertation on the coping strategies of the poor in the context of hyper-inflation and Shining Path terrorism in late-1980s Peru. What stood out was how resilient the poor population was but also how sophisticated in navigating incredibly difficult economic circumstances – such as exchanging their local wages into dollars overnight to prevent them from being quickly eroded by inflation. They were also learning what foods they could afford that were also healthy for their children. I still remember thinking that it was unlikely that consumers in the US, who have never faced such challenges, would have been able to navigate them with the same dexterity and optimism. And even now, more than 30 years later, my most recent surveys among low-income adolescents in both Peru and in Missouri find a sharp contrast between the high hopes and aspirations of Peruvian adolescents for education advancement and the very low hope of low-income American adolescents, especially white ones. 
Low-income minority communities in the US are much more hopeful in general, and value education as a pathway, while the latter has eroded among low-income white groups. Remarkably, the parents of my white respondents did not support them attending college, while the minority parents and other members of their communities strongly supported their young people seeking higher education. The visionary school superintendent I worked with to launch the Missouri project, Art McCoy, was a living example of how community support – in this case, in schools – could dramatically change the life trajectories of adolescents from minority communities (I will return to McCoy later). I was incredibly lucky, as a young scholar at the Brookings Institution in Washington, DC right after my PhD, to be in an environment where the economists were interested in my explorations into the psychology of poverty. The results of the survey research I was doing among poor populations in Peru directly challenged traditional economic theory. I found that the most upwardly mobile respondents in my sample were the most pessimistic in their assessments of their past economic progress, while poorer respondents were more positive in their assessments. We had objective data on how these respondents had fared in terms of income gains and losses, and we were able to confirm that there was no sampling error or bias in our survey questions. Thus, the question remained, why? Was it rising expectations? Loss aversion given the context of rapid but unstable growth? Newly acquired knowledge of how much more income the wealthy still had, despite their own upward progress? Character traits? It turns out it was a combination of all these things. Again, I was incredibly lucky to be in an environment where prestigious economists such as Henry Aaron, George Akerlof and Alice Rivlin – and Daniel Kahneman, the first psychologist to receive the Nobel Prize in economics – encouraged me to explore more, including using tools from other disciplines beyond economics. As a result, I got to know a small but incredibly talented group of economists, such as Richard Easterlin, Andrew Oswald and Angus Deaton, and top psychologists, such as Kahneman and Ed Diener, who were collaborating in combining the qualitative survey approach of psychologists with the econometric, maths-based techniques of economists. There was a great deal of scepticism early on from economists – indeed, many of them thought we were nuts! Yet the many puzzles we uncovered and the increasing use of the approach by a range of scholars ultimately resulted in it developing into a new science of wellbeing measurement. The new metrics added a great deal to our understanding by incorporating human emotions, aspirations and character traits into how we think about, analyse and model economic behaviours. By now, it has become almost mainstream to do so. Indeed, many governments around the world, led by the UK’s early effort in 2010 (in which I was lucky to play a small role) have begun to incorporate wellbeing metrics in their official statistics as a complement to traditional income-based data and as a tool in policy design and assessment, such as in cost-benefit analysis or health policy and environmental policy innovations. The OECD now has guidelines for best practices for national statistics offices around the world that want to utilise the metrics. 
(Unfortunately, despite having similar recommendations for how to do this in the US from a National Academy of Sciences panel – on which I participated – the US did not follow suit.) Most recently, the United Nations formed a commission to develop and recommend indicators of progress to complement GDP that can be adopted by countries around the world. The field has evolved from analysing the determinants of happiness and other dimensions of wellbeing (income matters but other things such as health, friendships, and meaning and purpose in life matter even more) to exploring what impact increases in wellbeing have on individuals and society as a whole. We consistently find that higher levels of wellbeing result in longer lives, better health, more productivity in the labour market, and more stable long-term relationships. Hope, meanwhile, is more important in determining those outcomes than is current happiness or life satisfaction. Hope is not just the belief that things will get better – that is optimism – but the conviction that individuals have the agency to make their lives better. During this intellectual journey, I increasingly began observing large contrasts between the hope and optimism of the poor in Latin America and the deep despair among the poor and near-poor in the US. My empirical surveys, based on Gallup data, confirmed those gaps. At the same time, Case and Deaton released their first paper on deaths of despair in the US. As I explored across race and income cohorts in the US, I found that it was low-income white people who had the lowest levels of hope and were also the cohort most represented in the deaths. I realised that the metrics we had developed could serve as warning indicators of populations at risk. While people with hope, meaning and agency have better outcomes, those in despair lack these emotions and traits and have lost their life narratives. Because of that, they are vulnerable to risky behaviours that jeopardise their futures; they typically do not respond to incentives; and they are more vulnerable to misinformation and related conspiracy theories that can fill the vacuum. I hope that we can now use the metrics as one of the tools we need to solve the mental health crisis among our young people. As I noted earlier, the causes of the youth mental health crisis – in the US and beyond – are deep and far reaching, and defy simple solutions. Yet if we highlight one of the key drivers of the crisis – the deep uncertainty the youth of today have about their futures and their ability to get jobs that will enable them to support families and have a reasonable quality of life – there is little doubt that education is a critical part of the solution. And central to that are the innovations that make education more accessible for those with limited means and resources, and train them for the new and complex skills required for the labour markets of tomorrow. To make education more accessible and relevant, we need to rethink the way it is delivered, so that students are supported and mentored, and how they can acquire skills that are not typically developed in secondary school curricula. There is no one tried-and-true recipe for success, not least because education has a context-specific element that must be tailored to the populations and communities in which it is delivered. 
As such, it is critical to involve key stakeholders (eg, parents, students and communities) and the consumers of education (eg, parents and students) when implementing innovations, as changes in the way children are educated rarely succeed without their support. A related and even more important lesson is the value of mentorship, particularly in less-privileged contexts, where students may be the first in their families to attend post-high-school education or in which parents are absent for a variety of reasons, such as familial trauma or overstretched work schedules. My research finds that mentors – either in schools or in families or communities – are critical to guiding students as they make difficult decisions about investing in their education with limited information and, perhaps even more important, in sustaining their aspirations and goals in the face of negative shocks or other challenges. Seeing the important role of hope and aspirations in driving the efforts of the poor in Peru to pursue better lives is what inspired me to study emotions and wellbeing in economics and to focus now on the youth mental health crisis through that lens. Research also shows the important role of skills that are not usually part of high-school curricula but which expose students to things that can help them succeed in the labour market, such as financial literacy, self-esteem and communications skills. The Youthful Savings programme, based in New York and Santa Monica, teaches high-schoolers financial literacy, equitable and ethical business practices, and the importance of paying attention to mental wellbeing. Mentors are an important part of the programme’s successful record. An example of their success in helping young people is Jose Santana, a high-school student from the Bronx and the son of Dominican immigrants. He was not planning on college but was able to go with support from Youthful Savings. He credits the skills the programme gave him but, more importantly, the mentorship he received from Somya Munjal, a young entrepreneur of Indian origin and the founder of Youthful Savings, which inspired Santana to continue his education to increase his chances of becoming a successful entrepreneur himself. Munjal, too, had to struggle to finance her own education but became increasingly motivated to succeed as she found meaning and purpose in helping others along the way. Santana is only one of the many young people Munjal has inspired. An example of exposing youth to skills that they can use in the labour market is debating clubs in high schools. Debating requires good communication skills, the ability to listen to opponents, and the ability to back one’s arguments with fact-based reasoning in calm and civil discourse. These are the skills that are quickly being eroded in today’s polarised and acrimonious environment. The renaissance of debating clubs in several Chicago public high schools has improved academic achievement and increased the hope, agency and engagement of the participating students, according to Robert Litan’s book Resolved (2020). Meanwhile, research by Rebecca Winthrop and Jenny Anderson for their book The Disengaged Teen (2025) confirms the importance of supporting students in developing skills beyond the usual academic curricula, such as creativity, exploration and agency. 
They categorise students as ‘resisters’, ‘passengers’, ‘achievers’ and ‘explorers’ – the last of which is the most engaged of all the groups. In the UK, the #BeeWell programme in the greater Manchester district conducts annual surveys that measure key wellbeing indicators and makes recommendations to communities based on their findings. Evaluations three years after the programme was implemented in 2019 across 192 schools throughout the district showed that significant action was taken in communities thanks to the findings of the programme, which are released privately to schools and publicly by neighbourhood. Meanwhile, community colleges around the US are playing an increasingly important role in helping low-income students attend and complete college. A leading-edge example is Macomb Community College (MCC) in Detroit, which in addition to its own faculty and curriculum provides a hub where local colleges and universities, including Michigan State University, offer courses so that students can go on to complete four-year degrees – an option that is invaluable for low-income students who often work and have families in the county, and cannot afford to move and pay room and board elsewhere. Approximately 63 per cent of the students who transfer from MCC go on to complete a four-year degree. MCC’s curricular innovations are paired with dedicated mentorship for each student who attends the college, as well as a programme that encourages civic discourse and attending lectures from outside speakers. This is unusual in a county that has as divided a population as Macomb. It combines autoworkers, newly arrived immigrants and a long-standing but historically discriminated-against African-American population. The college’s president emeritus Jim Jacobs told me that one objective is to show MCC students that they can thrive in the workforce without moving out of their county. Another programme aimed at inspiring low-income young adults is the micro-grants programme founded by Julie Rusk, the co-founder of Civic Wellbeing Partners in California. The programme provides small grants to low-income people, usually of minority origin, to initiate new entrepreneurial activities in Santa Monica. Most of the grantees are young, and the activities – which range from the arts to culinary initiatives to bike repair shops – benefit low-income communities in the city and have positive effects on the hopes and aspirations of the grant recipients. One reason for this success is the dedication of Rusk and her team to ensuring that their participants have the support they need to succeed. I close with the profile of Art McCoy, the amazing former school superintendent of the primarily African American Jennings school district in Saint Louis, Missouri. When McCoy took over the superintendency, Jennings had one of the worst completion records of Missouri’s high schools. It then achieved an impressive 100 per cent graduation, career and college placement rate, which is significantly higher than those of the other public high schools in the same district. While McCoy’s success is no doubt due to many complex factors, it is evident that a critical component was his ability to inspire students to invest in and succeed in their education aspirations, with seemingly limitless energy and hope. 
McCoy has an unparalleled ability to communicate to a wide range of audiences, including but not only students, with genuine empathy, compassion and humour, while encouraging effort, initiative and persistence. Working with McCoy also showed me that dedicated mentorship could transform a failing school district into a cohesive community that provides support and inspiration to take on daunting challenges. I have never ended a conversation with him (and there have been many) without a restored sense of hope, even during our divided and difficult times. He is now active in supporting young adults in deprived parts of Saint Louis to embrace entrepreneurship and independence as they begin their post-education careers. At a time when young people are suffering from doubts and anxiety about what their futures hold, it is people like Art McCoy, Somya Munjal and Julie Rusk who help them believe in themselves and their ability to overcome the challenges they face. While these challenges are complex and daunting and we do not have solutions for many of them, we do know that young people crippled with despair and anxiety are far more likely to withdraw from society. Restoring hope is not a guaranteed solution, but it is a critical first step. That is a lesson I learned early in my career from the poor people of Peru, and it still holds today in a very different context and time. Source of the article

GOATReads: Psychology

Why Does Every Global Event Feel Like a Crisis?

Understanding the psychology of presentism can help you adapt through change. Every generation believes it is living through the most dangerous, most consequential moment in history. Public policymakers call this presentism or presentism bias: our tendency to overestimate the singularity and existential weight of our own time. Psychologists would focus more on availability heuristics and recency bias: information that is most easily recalled, and events that are more recent, tend to seem most important. It isn’t irrational. The Prussian and the Roman empires aren’t going to hurt you now. The eventual heat death of the universe isn’t worth losing sleep over. Threats that feel most relevant are the ones that could affect us directly. Nuclear brinkmanship in the 1960s, stagflation in the 1970s, and the 2008 financial crash were each experienced as uniquely defining. The greater the crisis, the more it commands our attention. Human cognition evolved to prioritize the immediate and tangible over the distant and abstract. That vigilance helps us survive, but it also narrows perspective, making it harder to see continuity with the past or to prepare for the future. Overlapping Transitions History rarely moves in clean breaks. Even in times of upheaval, remnants of the old order persist. Industrial and digital economies overlap. Secular and religious values compete. Generational attitudes toward work, family, and identity clash and cross-pollinate at the same time. Psychologically, this coexistence can feel destabilizing. The neat categories people expect, like “old versus new” and “Boomers versus Zoomers,” don’t map cleanly onto reality. New technology is never simply better; it has risks and consequences. People find themselves in the middle of contradictions: leaders who preach change but make the same mistakes, workplaces that find new ways to make the same mistakes, families that are divided by different views and struggle to relate. This is where ambiguity tolerance plays an important role. It is a trait that describes how comfortable people are with uncertainty, complexity, and mixed messages. For those with high ambiguity tolerance, the coexistence of old and new feels manageable or even energizing. They can sit comfortably with paradox and enjoy the complexity and unpredictability. But for those with low ambiguity tolerance, overlapping and uncertain transitions can feel threatening. The lack of clarity can produce anxiety, rigid thinking, or even backlash. Living by Other People’s Rules This tension is familiar across the lifespan. Entering the workforce, many young adults discover that workplaces are governed by norms and hierarchies shaped by older generations. The rules of belonging, from communication styles to work ethic to career expectations, often feel strange and inscrutable. Over time, individuals may find their own cohort rising into positions of influence. But later in life, the cycle reverses. Older adults re-enter environments increasingly governed by younger norms, from digital platforms to cultural spaces, where they must once again adapt to rules that don’t feel like their own. At both ends of the lifespan, individuals often have the least autonomy and the least power to impose the norms of their own age group. Adolescents adapt to adult rules. The elderly adjust to the changes brought in by younger generations. It’s a process of constant renegotiation and change. Whenever things seem to be settling in, life events and global changes can upend everything again. 
The overall process of continuous change is nothing new in the context of history, but the challenges are totally new from an individual perspective. Why the Present Feels So Heavy What makes periods of transition so emotionally charged is that they always involve a struggle over power and legitimacy. Established ways of doing things must continually prove their relevance to be adopted by newcomers, while emerging groups test different ideas, pushing the boundaries of what should endure and what should be reformed. Psychologists studying identity threat show that when long-held norms are challenged, groups often respond with defensiveness, nostalgia, or outright hostility. Those in power may exaggerate the risks of new approaches not only because they are unfamiliar but because they unsettle the group’s social position or role. When the costs of maintaining old structures fall on some while the benefits accrue to others, resentment can slowly build up. Workplaces as a crucible of change Workplaces are a good example of this. Ideally, succession planning provides a bridge between generations: Experienced leaders pass down not just technical knowledge but also the tacit know-how, networks, and cultural memory that keep an organization functioning. Done well, it is gradual, deliberate, and reciprocal. The outgoing group retains dignity and respect by sharing what they have built, while the incoming group gains confidence by learning within a supportive structure. Both sides invest in a future where their contributions endure. Changes are navigated through interpersonal networks instead of crises and knee-jerk responses. In reality, succession is often messy. Urgent demands of the present overwhelm long-term preparation. Transitions happen suddenly, with little time for mentoring, documentation, or shared reflection. New groups are left to reinvent processes from scratch; older groups feel discarded or unappreciated. Instead of continuity, organizations get torn apart. When resentment builds, people may be eager to discard inherited structures and processes wholesale, especially when they were not involved in sustaining those structures or in preserving their value. This is one of the reasons times of transition can feel so difficult. Not only are rules and norms shifting, but the difficult work of handing them over is often avoided or resisted. The demands increase without sufficient resources (psychological, social, material) to support the process of change. Without deliberate, well-managed succession, the weight of the present demands feels more intense, as each new group scrambles to rebuild what might have been passed on more gracefully and with less resulting damage. Resistance to change Decline is inevitable. Change can bring renewal, but when people cling to power long after their ability to wield it effectively has faded, they often accelerate their own downfall. Bound by their own rules and hierarchies, leaders may resist change even as their capacity to shape those around them diminishes. Increasingly rigid or manipulative methods only hasten collapse. A long-standing leadership structure may still appear to wield power, but without moral authority or genuine respect, its rules become hollow: observed outwardly but ignored whenever it is inconvenient to follow them. The weight of now Presentism and associated psychological biases mean we tend to see our own era as uniquely consequential, more turbulent or unstable than anything that came before. 
In some sense, this is inevitable: the crises we face today are the ones we can most immediately feel. Recognizing presentism doesn’t make current crises less real, but it offers perspective. We are not the first to feel this way, and we will not be the last. Source of the article

Essence is fluttering

As Zhuangzi saw, there is no immutably true self. Instead our identity is as dynamic and alive as a butterfly in flight. ‘Most people are other people,’ wrote Oscar Wilde. ‘Their thoughts are some one else’s opinions, their lives a mimicry, their passions a quotation.’ This was obviously meant as a criticism, but what exactly is the criticism? Most people are other people, in a different sense to how Wilde meant it: the vast majority of people are not me. The enormous size of this majority – billions to one – guarantees that there will be somebody better than me at anything I can think of. If I make a dress for the first time, I am wise to follow a pattern. If I cook a meal for the first time, I am wise to follow a recipe. This is (as far as I know) my first time living as a human being, so why wouldn’t it be wise for me to emulate a successful model of living, especially when there are so many candidate models, past and present? When Wilde was writing, literary culture had reached the pinnacle of Romantic individualism. In that culture, it was obvious what’s wrong with being other people: doing so is a betrayal of your true self. Each of us was thought to possess a unique, individual identity, sewn into the very fabric of our being. Walt Whitman celebrated ‘the thought of Identity – yours for you, whoever you are, as mine for me,’ and defined it as: ‘The quality of BEING, in the object’s self, according to its own central idea and purpose, and of growing therefrom and thereto – not criticism by other standards, and adjustment thereto.’ In the 20th century, the assumption would be famously challenged, for example by Jean-Paul Sartre, who proclaimed that humans come into existence with no definite identity – no inborn ‘central idea and purpose’ at all: ‘man is nothing else but that which he makes of himself’. This is a logically tricky point, since it’s unclear how something devoid of all identity could ‘make itself’ into anything. We can’t suppose, for example, that it makes itself according to its whims, or inclinations, or desires, since if it has any of those then it already has an identity of some sort. This gets us into a quandary. Simone de Beauvoir – Sartre’s partner in philosophy, life, and crime – responded to it by admitting that self-creation can work ‘only on a basis revealed by other men’. But popular culture has by and large responded by retreating uncritically to the old Romanticism. Apple’s Steve Jobs advised a graduating class at Stanford not to ‘let the noise of others’ opinions drown out your own inner voice’ and to ‘have the courage to follow your heart and intuition’, which ‘somehow already know what you truly want to become’. Advertisements for shampoo and travel money apps advise you to dig deep inside and find the True You. The question of how we each came to be pre-programmed with this unique true identity, this articulate inner voice, is left aside, as is the more troubling question of how the advertisers can be so confident in betting that your true self is going to love their products. How would things look, philosophically, if we cleared this Romantic notion out of the picture? I think we can get a sense of it by looking at the great philosophical tradition that flourished before the formation of the Qin dynasty in what is now China. 
The most well-known philosopher from this tradition is Confucius (Kong Qiu, 孔丘), classically believed to have lived from 551 to 479 BCE. In reality, books ascribed to him were probably written by multiple authors over a long period. The most famous of these, the Analects, propounds an ethical ideal based on emulating admirable examples. The philosopher Amy Olberding wrote a whole book devoted to this topic, Moral Exemplars in the Analects (2012). For Confucius, being ‘other people’ is precisely what you should be aiming at – as long as you emulate praiseworthy people like the great sage-kings Yao and Shun or indeed Confucius himself. The objection that this would betray your true inborn identity doesn’t come up. The idea that we each have an individual, inborn, true identity doesn’t seem to appear in this tradition.

What the tradition does recognise is role-identities. Confucius was concerned that these were being lost in his time. In the Analects, he declares that his first act in government would be to ‘rectify names’ (zheng ming, 正名). This is a complex concept, but some light is shed on its meaning by another passage, in which he is asked about zheng (政) – social order – and replies: ‘Let the lord be a true lord, the ministers true ministers, the fathers true fathers, and the sons true sons.’ Confucius feared the loss of the traditional zheng, which he associated with the recently collapsed Western Zhou Kingdom, leading to a social chaos in which lords, ministers, fathers, and sons no longer played their appropriate roles. ‘Rectifying names’ could mean ensuring that people live up to their names – not their individual names but the names of their social role or station (ming, 名, can be used to mean ‘name’ but also to denote rank, or status). To the question ‘Who am I?’, Confucius would like you to reply with your traditionally defined social role. As to how you should play that role, the ideal would be to emulate a well-known exemplary figure who played a similar role. Under the Han dynasty, which adopted Confucianism as a sort of official philosophy, many catalogues of role-models were produced, for instance Liu Xiang’s Traditions of Exemplary Women (Lie nu chuan, 列女傳), full of models of wifehood, motherhood, ladyhood, etc.

Confucius’s philosophy was opposed in his era, but not by Romantic individualists. On one side was Mozi (or Mo Di, 墨翟), who proposed that we should take nature as a model rather than past heroes. For this to make sense, nature had to be anthropomorphised into having cares and concerns – a comparison could be made with the ancient Greek and Roman Stoics. On the other side was Zhuangzi (Zhuang Zhou, 莊周), perhaps the strangest philosopher of any culture, and a central focus of my book, Against Identity: The Wisdom of Escaping the Self (2025). Zhuangzi (again, the writings ascribed to him – called the Zhuangzi – were probably written by multiple authors) rejected Confucian role-conformism. He argued that you shouldn’t aim to be a sage-king, or an exemplary mother, or any other predetermined role-identity. You shouldn’t aim, in Wilde’s terms, to be other people. In our highly individualistic culture, we can’t help but expect this line of thinking to continue: just be yourself! But this is not what Zhuangzi says. Instead, he says: ‘zhi ren wu ji (至人無己),’ translated as: ‘the Consummate Person has no fixed identity’ or ‘the ultimate person has no self’.
The ethical ideal is not to replace a conformist identity with an individual one. It is to get rid of identity altogether. As the philosopher Brook Ziporyn puts it, ‘it is just as dangerous to try to be like yourself as to try to be like anyone else’. Why is it dangerous? In the first place, attachment to a fixed identity closes you off from taking on new forms. This in turn makes it difficult for you to adapt to new situations. In her book Freedom’s Frailty (2024), Christine Abigail L Tan puts it this way: ‘if one commits to an identity that is fixed, then that is already problematic as one does not self-transform or self-generate.’

Borrowing a term from psychology, we could call this the problem of ‘identity foreclosure’. The American Psychological Association defines ‘identity foreclosure’ as ‘premature commitment to an identity: the unquestioning acceptance by individuals (usually adolescents) of the role, values, and goals that others (eg, parents, close friends, teachers, athletic coaches) have chosen for them’. But the radical message of the Zhuangzi is that it can be just as dangerous a ‘foreclosure’ to accept the role, values and goals that you have chosen for yourself. Doing so cuts you off from the possibility of radically rethinking all of these under external influences. Indeed, it drives you to resist external influences, for a simple reason. We have a strong survival instinct – an urge to continue existing. But continuing to exist means retaining the form that makes you yourself and not something else. Turning into a corpse, obviously, doesn’t count as surviving, but neither would turning into something too radically different from what you fundamentally are. I fear waking up tomorrow with my body, memories and personality replaced by those of somebody else about as much as dying in my sleep; indeed, it might well count as me dying. Surviving means remaining the same in crucial, definitive respects. But this means that the more narrowly you define yourself, the more defensive you will feel against external influences that might change you.

The term ‘identity’ lends itself naturally to this sense of self-definition. It comes from the Latin identitas, with the root idem, which means ‘same’. A common expression is unus et idem, ‘one and the same’. Your identity is whatever must stay the same in order for you to remain you. A narrow identity makes heavy demands of consistency, upon you and also upon the wider world. If your identity is bound up in being a harness-maker, then your survival (under that identity) requires not only that you keep working at making harnesses; the harness industry must also stay viable. If the industry dies out, for example with the coming of automobiles, then you will find yourself in a desperate identity crisis, panicked at the idea that nothing you can now be will count as what you had hitherto recognised as yourself. Hopefully you will find new things by which to define yourself. But you could have saved yourself the anguish by not binding up your identity with something so specific in the first place.

Now suppose that your identity is bound up with certain religious or political beliefs. In that case, your survival instinct will be put on alert anytime anything threatens those beliefs. The more convincing an argument against them seems, the less able you will be to hear it. The more appealing an alternative seems, the harder you will push it away – for fear of changing and losing yourself in the change.
In this way, foreclosing on a fixed identity, even one that you have chosen, will push you to insulate yourself from external influences. We can think of an example from Sartre: ‘the attentive pupil who wants to be attentive exhausts himself – his gaze riveted on his teacher, all ears – in playing the attentive pupil, to the point where he can no longer listen to anything.’ But the world is always changing, in ways that we cannot predict. Attachment to a fixed identity drives us to close ourselves off from external influences that might otherwise have been very valuable in guiding us through change and uncertainty. There are many stories of Australian colonists rejecting the Indigenous knowledge that could have helped them survive in a harsh and unknown environment, due to their excessive attachment to an idea of themselves as scientifically and racially superior. This idea was incompatible with the suggestion that they might have something to learn from those they saw as ‘naked savages’.

There is another reason that trying to be yourself, in the sense of some fixed identity, is dangerous. To appreciate it, we must ask: where did you get the idea of that fixed identity? Remember that we are now imagining ourselves in a cultural context devoid of the Romantic notion that each of us is born with an inborn self. The compiler of the earliest surviving text of the Zhuangzi, Guo Xiang (252-312 CE), also provided a commentary, which outlines a very different notion of self. Guo’s commentary draws out elements in the Zhuangzi that criticise the Confucian ethic of model-emulation, which Guo calls following ‘footprints’ (ji, 跡). For example, Guo comments on one passage as follows: ‘Benevolence and righteousness naturally belong to one’s innate character [qing, 情], but beginning with the Three Dynasties, people have perversely joined in such noisy contention over them, abandoning what is in their innate characters to chase after the footprints of them elsewhere, behaving as if they would never catch up, and this too, has it not, resulted in much grief!’

The reference to ‘innate character’ here might mislead us into reading this passage in a Romantic way – a familiar celebration of being your true, inborn self rather than following in the footsteps of others. But when Guo uses terms like this, or ‘original nature’ (benxing, 本性), he appears to mean something quite different. Tan explains: ‘original nature (benxing, 本性) does not actually mean that it is unchanging and fixed, or that it is inborn, for it simply means unfettered.’ Zhuangzi’s view, as interpreted by Guo, is that when we seek a definite identity, we betray our true nature as fundamentally fluid and indeterminate. We end up pursuing some external model of definiteness. Even if the model is one of our own devising, it is external to our true indefinite nature.

One story in the Zhuangzi tells of a welcoming and benevolent but faceless emperor, Hundun. Hundun has a face drilled into him by two other emperors who already have faces of their own. As a result, he dies. The story suggests that fixed identity always comes to us from the outside, from others who have already attached themselves to fixed identities and drive us to do the same, not usually by drills but rather by example.
This peer-driven attachment to identity kills our fundamental nature, which is formless and fluid like Hundun (his name, 混沌, means something like ‘mixed-up chaos’, and each character contains the water radical signalling fluidity). Our true Hundun-nature is the capacity to take on many forms without being finally defined by any of them.

This encounter with a distant philosophical culture liberates us to ask questions we might not have otherwise thought to ask. Imagine if we hadn’t been conditioned to believe that our true self is something fixed and inborn. Would we have inevitably found our way to this idea? Or might we instead, like Zhuangzi, have supposed our true nature to reside in boundless suppleness and fluidity, upon which any definite identity can only be a foreign imposition? What would our culture look like if that was the dominant idea we all had of our true self? Would it reduce human life to a meaningless chaos of wandering without purpose? Or could it, perhaps, be more peaceful, more adaptable, and more exciting?

I am inclined towards the latter position. Admitting my confirmation bias, I see examples everywhere of how identity holds us back. In a complex and unpredictable world, nations need more than ever to learn from each other. Instead they are closing their doors to foreigners and going into international dialogues with megaphone on and earplugs in. In modern democracies, people vote for who they are, not what they want, as Kwame Anthony Appiah puts it, leading to policies that pit identity groups against each other, rather than pursuing collective benefits – or indeed even real benefits to any one group. Information technology puts the whole world at our fingertips, yet people remain shockingly incurious about anything that lies outside their own narrow cultural sphere – as if fearful that exposure to too much difference will detach them from their treasured identity. And even when current patterns are shown to be unsustainable, we find it difficult to change them, due to our identities becoming somehow bound up in them.

A personal example of the latter is my experience of grief. As I slowly lost my father to Alzheimer’s disease, I realised that the terrifying part of grief is the feeling of not only losing a loved one but losing yourself. I struggled to imagine myself without a figure who shone out warmth in my earliest memories. I was stuck in a desperate, hopeless compulsion to yank back the past into the present. It was my father’s own courage in the face of his much more direct loss of identity that taught me that I too could adapt, learning to accept and even appreciate the complete transformation of myself.

The most famous story of the Zhuangzi is the Butterfly Dream. Zhuangzi awakens from a dream, in which he was a butterfly fluttering freely. He doesn’t know if he is Zhuangzi, who dreamt he was a butterfly, or a butterfly now dreaming of being Zhuangzi. This story puts Zhuangzi in contact with ‘the transformation of things’ – a reality in which identities are always fluid and never fixed. As Kuang-Ming Wu points out in The Butterfly as Companion (1990), the butterfly’s ‘essence is fluttering’. Hundun and the butterfly are symbols of an inner fluttering – an inner indefiniteness that lies deeper in us than any fixed identity, whether chosen or imposed.
The demand to be true to ourselves – as individuals, as communities, as churches, parties, cities, nations – leads us to manhandle the world. Keeping ourselves the same means keeping things in the right shape to provide the context for our self-definition. Others are trying to keep things in the right shape to suit their self-definitions. The result is a world full of strife and devoid of progress. Perhaps it is time to seek our inner fluttering instead. Source of the article

GOATReads: Literature

Stephen Dedalus in A Portrait of the Artist as a Young Man

'His soul had arisen from the grave of boyhood, spurning her grave-clothes. Yes! Yes! Yes! He would create proudly out of the freedom and power of his soul, as the great artificer whose name he bore, a living thing, new and soaring and beautiful, impalpable, imperishable.'

Throughout A Portrait of the Artist as a Young Man Stephen Dedalus is persistently portrayed as the outsider, apart from the society he and his family inhabit, connecting with no-one and seeking solitude and isolation at every turn. Does this self-imposed exile lead to, or directly influence, his artistic awakening? This essay will examine (both thematically and stylistically) Stephen's alienation from the traditional voices of authority in his life and explore how this impacts upon his budding artistic talent.

A Portrait of the Artist as a Young Man was Joyce's first published novel, written in neutral Switzerland but published in New York in 1916. Europe was at war and Michael Collins had been taken prisoner during the Easter Rising in Dublin. This novel is therefore bound up with an Irish history rich in rebels and freedom fighters. A real history was raging in Joyce's homeland where the Fenians were fighting against English rule, the oppressive landlord system and eventually the Catholic church in hock to the English rulers. The novel, however, as the title suggests, is not a story of revolutionary politics but of the quiet but dogged rebellion of a young man in search of his artistic voice.

From the opening pages the reader realises that this is no traditional narrative. There is no safe third-person distance from the main protagonist: the reader never escapes Stephen's perception of events. The style is direct and visceral and reflects, in its immediacy, the disjointed manner in which memories are recollected and thoughts enter the protagonist's imagination. The effect is claustrophobic but also highly instructive as the reader walks beside Stephen on his journey of self-discovery. The readers, discerning as they are, will groan at some of Stephen's poetry and mawkish ideas but they cannot deny that they are seeing what Stephen sees and experiencing his life first hand.

Stephen starts as an object - Baby Tuckoo - in his father's story of his early years and is thus without his own identity. Later, at Clongowes, he is either gripped with embarrassment as he fails to connect with his peers or speechless at a family Christmas dinner as debate and anger rages around him. He is isolated, associating only with the sounds of words (belt, iss, suck) and other stimuli. He doesn't understand the schoolboy argot and his consequent victimisation is all too predictable as his peers react with typical schoolboy nastiness to a boy who doesn't fit in. In protesting his palm-whipping, however, Stephen not only wins back the respect of his peers but also performs his first act of rebellion or independence. In the young boy, this apartness appears too real and solid to be something he will simply grow out of or learn to subsume, and he turns to literature as a means of escape. It is no mere chance that Stephen enjoys The Count of Monte Cristo, the story of choice for many schoolboys seeking escape from the imprisonment of school and the cruelty of their peers.
Reality eventually encroaches upon Stephen's internal reveries and teenage angst: 'In a vague way he understood that his father was in trouble and this was the reason why he himself had not been sent back to Clongowes' (p.64). Stephen's selfish detachment persists throughout the book as external events of great import to those who love him drift in and out of his consciousness without having any real direct impact. This is, of course, if his internal dialogue is to be believed. Thus we see Stephen isolated from his peers as his family struggles from one property to another and his father from pub to pub seeking work.

He can find no real connection with his father. The distance between them can be seen when, having performed his act of rebellion at school concerning his unjustified palm-whipping, Stephen hears his father recalling a conversation with the Jesuit to whom Stephen protested: 'Shows you the spirit in which they take the boys there. O, a Jesuit for your life, for diplomacy!' (p.73). The bluff manner in which his father refers to the incident couldn't be further from Stephen's own tortured experience. It is around this time also that Stephen commences a period of whoring. Whilst this sexual engagement with prostitutes requires no emotional attachment, this interlude, coming as it does at the conclusion of chapter 2, signifies the nadir of Stephen's path away from Jesuit and familial authority.

But what of Stephen's artistic yearnings? At this stage there is no discernible development of a poetic voice but Stephen does feel some shadowy intimation of otherness or the transcendental world: 'A vague dissatisfaction grew up within him as he looked on the quays and on the river and on the lowering skies and yet he continued to wander up and down day after day as if he really sought someone who eluded him' (p.67). Stephen is in physical exile and whilst his family is cast adrift, he is groping for an artistic expression which eludes him. A particular scene, remembered during a conversation with his friend, portrays his directionless spirit: 'The old restlessness had again filled his breast as it had done on the night of the party, but had not found an outlet in verse. The growth and knowledge of two years of boyhood stood between then and now, forbidding such an outlet' (p.77).

It is the adolescent games of torment and humiliation coupled with a rigid Catholic approach to literary criticism ('In any case Byron was a heretic and immoral too' (p.81)) that prevent the genuine artistic outlet Stephen seeks. Visions remain formless and his isolation from his peers prevents him from relating to them: 'Do you use a holder? - I don't smoke, answered Stephen - No, said Heron, Dedalus is a model youth. He doesn't smoke and he doesn't go to bazaars and he doesn't flirt and he doesn't damn anything or damn all' (p.76). Were he able to voice these half-formed feelings amongst like-minded young men, perhaps he would feel less isolated. The strict Catholic nature of their education and the widening social gap between him and his peers brought about by his father's downfall cement his alienation and otherness, so his artistic yearnings remain 'monstrous reveries' (p.90) without any real articulation or development. By the conclusion of chapter 3 Stephen is in a state of cold, lucid indifference, alienated from his schoolmates and lost in a world of meaningless sexual encounters.
Yet by the start of chapter 4 he is well on his way back to a state of grace as he takes whole-hearted part in a Catholic retreat organised by Belvedere College. This passage of writing is tedious and its repetitive, didactic style reflects Stephen's utter immersion in the Catholic faith. His fate is almost sealed as he is invited to take Holy Orders at the conclusion of his devotions. Stephen is seduced at first but his by now instinctive resistance to any form of belonging again kicks in: 'At once, from every part of his being unrest began to irradiate' (p.161). His decision to reject the priesthood is a serious one which Stephen follows with conviction, even refusing to perform Easter Duties for his mother.

By now, Stephen is grasping his own destiny and taking positive steps towards a mature poetic voice. Thus the phrase 'a day of dappled seaborne clouds' is taken by Stephen and woven into his own experience: 'The phrase and the day and the scene harmonised in a chord' (p.166). These lines mark Stephen out as the young artist he has been aspiring to be, relating the poetic to the mundane in a structured and considered manner. At the same time he becomes aware of the symbolic nature of his surname and the mythical character from which it is taken. Dedalus was the great artificer and creator of Icarus' wings, which were themselves a symbol of escape. As the Ovid quote at the start of the novel states, it was Dedalus who ... altered/improved the laws of nature.

By the conclusion of chapter 5, therefore, we see Stephen the creator who chooses exile rather than the daring Icarus-like youngster seeking escape but doomed to failure: 'His soul had arisen from the grave of boyhood, spurning her grave-clothes. Yes! Yes! Yes! He would create proudly out of the freedom and power of his soul, as the great artificer whose name he bore, a living thing, new and soaring and beautiful, impalpable, imperishable' (p.170). Stephen's rejection of the environment that shaped him is now complete and his diary entries at the conclusion of the novel show a purposeful young artist seeking expression in Europe. Source of the article

GOATReads: History

Masters and Mamluks: Islam’s Slave Soldiers

The military elite of the medieval and early modern Muslim world consisted of men who had been captured and forced into service. But to what extent were the janissaries and their predecessors subject to slavery?

‘I see that those on my side have been routed. I fear they will abandon me. I do not expect them to return. I have decided to dismount and fight by myself, until God decrees what He wants. Whoever of you wishes to depart, let him depart. By God, I would rather that you survive than that you perish and be destroyed!’ They replied: ‘Then we would be treating you unjustly, by God! You freed us from slavery, raised us up from humiliation, enriched us after we were poor and then we abandon you in this condition! No, we will advance before you and die beside the stirrup of your horse. May God curse this world and life after your death!’ Then they dismounted, hamstrung their horses, and attacked.

This is an excerpt from al-Tabari’s universal history describing the exchange between an emir (commander) and his military slaves and freedmen after the tide of battle had turned against them during the Abbasid civil war (811-819). The emir’s forces had fled and he was left only with his slaves, who refused to abandon their master even though he urged them to save themselves. This exchange exemplifies the loyalty of military slaves and freedmen, a characteristic that made them the most elite and reliable soldiers of the medieval and early modern Muslim world. Over a period of almost 1,000 years, military slaves and the institution of military slavery dominated premodern Muslim polities. From their rise to prominence in the early ninth century under the Abbasid caliph al-Mutasim (r. 833-842) to the disbandment of the Ottoman janissary corps in 1826, military slaves formed the elite core and backbone of almost every Muslim army. Military slaves also rose to positions of power that enabled them to dominate the politics, economics and cultures of the societies in which they lived.

The beginning of a tradition

During the early period of the Islamic empire, following the Prophet’s death in 632, armies were primarily composed of Arab warriors, with slaves and freedmen (often referred to as mawali or clients) serving mainly as bodyguards and retainers of the caliphs and commanders. The situation changed with the overthrow of the Umayyads by the Abbasid revolution in 750. Mawali, especially Iranians, started to play a bigger role in both the army and the administration. But it was not until the end of the Abbasid civil war between the sons of the caliph Harun al-Rashid, who died in 809, that the caliphate’s military was fully transformed, with a professional standing army of slave soldiers at its core. Al-Mamun, the victor in the civil war, had depended on a versatile and mobile cavalry army of eastern Iranians and Turks that defeated the much larger forces mustered against him by his brother, al-Amin. Al-Mutasim, al-Mamun’s other brother and successor, took things further and reformed the military during his reign. Even before his ascent to the throne, al-Mutasim created a private army primarily of Turkic slaves purchased from the Samanids, a semi-autonomous Iranian dynasty that ruled the eastern parts of the caliphate. The Samanids were in direct contact with the Turks who inhabited the steppe. The two sides often raided one another, resulting in large numbers of captive Turks entering the caliphate.
Upon his accession in 833, al-Mutasim disbanded the old army, which had been dominated by the Arabs and Iranians, and removed them from the imperial payroll, relegating them to the role of auxiliaries. He replaced these troops with his Turkic slave soldiers, vassals from eastern Iran and mercenaries. Why? The need for reliable, loyal and skilled soldiers is one of many reasons that the rulers of the Muslim world adopted military slavery. The loyalties of the warriors who formed the early Muslim armies often lay with their tribes or the regions from which they hailed. Often, these men did not wish to leave their homes, land and families to go on long and distant campaigns. Furthermore, four major civil wars were fought during the first two centuries of Islam, which threatened to split the Muslim world along political, regional, tribal, factional and, later, sectarian lines. In these conflicts, acts of treachery and betrayal were common. It was during the civil war between his brothers, al-Amin and al-Mamun, that al-Mutasim witnessed first-hand how his older brother al-Amin was abandoned by his forces when the tide turned against him.

Recruiting and training slave soldiers with tight bonds of solidarity to the master and to one another was the solution to this problem. Slave soldiers were usually from beyond the boundaries of the caliphate. They were either prisoners of war or purchased from traders, and many were acquired at a young age. They were educated, raised and trained in their patron’s household and became members of his family. As a result, strong bonds of loyalty were formed between the slaves and their master, and among the slaves themselves. Being foreigners, the slaves looked to their master for their pay, rewards and wellbeing; the master depended on his slaves to protect him and keep him in power.

Imprisoned or purchased

By the late ninth century, the caliphs had lost much of their power. Although the caliphate still existed, it had fractured into fragments ruled by autonomous dynasties. But these polities modelled their armies on the caliphate’s military, recruiting military slaves as elite soldiers on whom they depended. They came from varied ethnic backgrounds and included Turks, Greeks, Slavs, Africans and Mongols, their origins and numbers varying with each polity’s proximity to the regions from which the slaves were taken. North African dynasties had larger numbers of African slave soldiers, brought across the Sahara, and of Slavic slave soldiers, referred to as saqaliba, purchased from Frankish and Italian slave merchants across the Mediterranean and along the Iberian frontier. There were large numbers of saqaliba in Islamic Spain, and East Africans and Abyssinians found their way into the militaries of Syria and Iraq through the slave trade across the Red Sea and Egypt. Large numbers of Turks and others from the steppes of Inner Eurasia filled the ranks of the armies of the dynasties that ruled the region spanning Central Asia to Egypt. By the 16th century, the main sources of military slaves for the two most powerful Muslim empires, the Ottomans and the Safavids, were the Balkans and Georgia, respectively.

Regardless of geographic proximity, most medieval Islamic polities sought to acquire Turkic slave soldiers. During the Middle Ages the Turks were seen as the most martial of all peoples and became the elite soldiers of most Muslim armies. They were considered a tough and hardy martial people, uncorrupted by civilisation and urban life.
Hailing from the Altai region, Turkic tribes inhabited large portions of the Inner Eurasian steppes, which brought them into direct contact with the Muslims on their northern and eastern frontiers. These pastoralist nomads had to survive in harsh environments; their tribes raided one another for livestock and competed for grazing grounds. They also raided and fought the sedentary peoples around them. Turkic children learned to ride horses and use weapons, specifically the bow. As slave soldiers in the medieval Muslim world, in which mounted warriors dominated the battlefield, the Turks therefore served as elite heavy cavalry, forming the dominant strike force of most Muslim dynasties until the rise of the Ottomans and the creation of their elite janissary corps, composed of infantrymen.

The Islamic institution of military slavery produced some of the period’s best soldiers. Upon being purchased, often at high prices, the slaves were attached to their master’s household. They underwent years of education and rigorous training, which included riding horses as individuals and in grouped formations, archery, using melee weapons, such as swords, lances, maces, daggers and axes, both on foot and on horseback, and wrestling on horseback. The acquisition and training of military slaves was expensive and involved investment in time. Most often it was only the ruling elite that could afford to recruit them in large numbers. Often, the slaves were emancipated upon the completion of their military training.

Vocabulary of slaves

There were several terms designating various types of slaves. One of the earliest, used between the ninth and 12th centuries, was ghulam, meaning ‘boy’ or ‘youth’. This is not surprising, because a large number of the slave soldiers were either captured or purchased when they were still young boys. This term was eventually replaced, by the late 12th and early 13th centuries, with mamluk, ‘one who is owned’. Both of these terms refer to a specific group of military slaves who were fair-skinned and fought on horseback. The terms abd, abid and sudan all refer to African slave soldiers, who were regarded as inferior to their mamluk counterparts. The saqaliba were military slaves of mainly Slavic origin, who served the Umayyad caliphate of Islamic Spain and some of the North African polities. The term kul was used in the Ottoman period to refer to the sultan’s slaves and means ‘slave’ or ‘servant’. Finally, kapi-kulu, meaning ‘slaves of the Porte’, referred to the household troops that formed the Ottoman sultans’ standing army and included the janissary infantry corps.

Although the system of training military slaves on a grand scale was unique to the Muslim world, there were Iranian and Central Asian practices that may have provided the foundation upon which the Muslims built. The Sogdians, an Iranian people who lived between the Amu Darya and Syr Darya Rivers in Central Asia (in modern Tajikistan and Uzbekistan), were heavily engaged in trade with both the east and the west. They gathered children and trained them as military slaves to defend their city-states and protect their caravans. The Sasanians, the last great Iranian empire before the Islamic conquests, also enlisted prisoners of war and slaves into their military, settling them in frontier regions, which they defended in exchange for land and pay.
Incidentally, it was the Samanids, a semi-independent Iranian dynasty ruling the eastern parts of the caliphate (including Sogdia), who first created a corps of Turkic slave soldiers. It may have been their sojourn in the east during the Abbasid civil war that prompted both al-Mamun and al-Mutasim to adopt this tradition of training and using slave soldiers. What made Islamic military slavery different was its institutionalisation, the slaves’ elite status within society, and their proximity to, and influence on, the central ruling powers.

There were other societies that used slaves for war during various eras. The Spartans sometimes mobilised the Helots, their servile population, for war. Herodotus claims that there were Helots among the Greek casualties of Thermopylae and that at the Battle of Plataea every one of the 5,000 Spartan hoplites was accompanied by seven lightly armed Helots. The Romans enlisted large numbers of slaves to replenish the ranks of their legions after suffering several defeats by Hannibal during the Second Punic War. The European colonial powers also recruited slaves in their colonies during times of war in the Americas and Africa. Slaves also participated in the fighting during the American War of Independence and the American Civil War. But in the Muslim world, slave soldiers formed socio-military elites and, in some cases, even rose to form the ruling class. In other societies, slaves were enlisted into the military during emergencies, such as civil wars, after military defeats that left the ranks of the regular army depleted, or when there was a shortage of manpower; but they had little or no social standing or influence.

Wealth and social mobility

Unlike slaves in other societies, military slaves were paid handsomely for their services. They received stipends and salaries from the central treasury. Military slavery was also one of the means through which one could acquire upward social mobility in an age when climbing the social ladder was rare. The most intelligent, promising, loyal, brave and capable military slaves were promoted to become officers and generals in the army, to government posts and to positions in the ruler’s household and inner circle. Posts such as royal arms bearer, cup bearer, holder of the royal inkwell, keeper of the hunting dogs, stable master, master of the hunt and chamberlain may not seem impressive, but they were all held by senior officers. Such positions indicated the slaves’ proximity to the master, the intimate relationship they shared and the trust that the patron had for the men who served him. Most of the Ottoman viziers were the brightest cadets selected from among the boys collected for the janissary corps.

Slave soldiers who were promoted to army officers and government officials became wealthy and powerful. In addition to receiving their pay they were often given parcels of land from which they drew an income. They amassed huge amounts of wealth in the form of gold, land, palaces, horses and livestock. These commanders, who had started their careers as slaves, then recruited slave soldiers of their own. Some slave generals grew so powerful that they were able to challenge their masters and, in some cases, overthrow them and establish empires and dynasties of their own. There are several examples of slave soldiers turning on their masters. In 861 the Abbasid caliph al-Mutawakkil was murdered by his Turkic guards while drinking with some of his companions.
The Turks were members of the army that his father, al-Mutasim, had created. In the east, Alp Tegin, a Turkic slave of the Samanids, had grown too powerful for his master’s comfort. When the Samanid prince divested him of his rank and possessions and sent an army to arrest him in 962, Alp Tegin fought his master and defeated the force sent against him. He then fled to and conquered the city of Ghazna (in modern Afghanistan) with his own slave soldiers. From there he and his successors created the Ghaznavid Empire (977-1163) that eventually swallowed up the domains of their former masters. Similarly, in 868 Ahmad ibn Tulun, a member of the Abbasids’ Turkish guard, was sent to Egypt as its governor, but he took complete control of the treasury and created a new army that was loyal to him. He and his descendants managed to maintain their independence until 905. With the collapse of the Umayyad Caliphate of Cordoba in Islamic Spain in 1031, several successor states emerged, known as the Taifa kingdoms. A number of these principalities were established and ruled by the saqaliba and included the Taifas of Valencia, Denia and Almeria.

Perhaps the best example of military slaves in power is the Mamluk sultanate of Egypt and Syria, established in 1250, which lasted until 1517. The last effective Ayyubid sultan, al-Salih Najm al-Din Ayyub, created a new army composed primarily of mamluks after he rose to power. He had previously been betrayed by his troops during his struggle with other family members for the throne and it was only his mamluks who remained loyal to him. He treated them well, paying them handsomely and promoting many to high positions. When he died, his successor, Turanshah, did not share his father’s affection for the mamluks and made it clear that he was going to disband them and have their leaders killed. Upon learning of the new sultan’s intentions, the mamluks killed him and established their own regime, rather than returning to their homelands. They continued to refer to themselves as mamluks, which they considered more honourable than being a mere freeborn subject of the caliph.

Loyalty paradox

Although the institution of military slavery produced excellent and loyal elite soldiers, it had its weaknesses. Loyalty did not necessarily pass on to a ruler’s successor, who was sometimes deposed and killed. Successors who managed to establish themselves on the throne often purged their predecessor’s slaves, replacing them with their own. The Ottoman case was exceptional, because the army was loyal to the dynasty and not to individual sultans. Riots, mutinies and rebellions were common, the main trigger being late or unforthcoming pay or mistreatment. Mardavij ibn Ziyar, a northern Iranian prince, soldier of fortune and the founder of the Ziyarid dynasty, for example, was murdered by his Turkish slave soldiers due to his mistreatment of them. Similarly, the great Mamluk emir, Yalbugha al-Umari, was murdered at the peak of his power in 1366 because of his harshness and the severe punishments he meted out to those who fell short of his expectations. Another weakness of the institution of military slavery concerned manpower and cost. That most military slaves were foreigners, as well as the time and money it took to train them, made them valuable and costly assets which were difficult to replace. Military slaves got married and had families; however, their descendants were born free as Muslims.
Having grown up in the towns and cities of the Muslim world, they were viewed as being less martial than their fathers and not suitable to replace them. Fresh, tough and ‘uncorrupted’ recruits were preferred, brought in from the steppes or mountainous regions such as the Caucasus.

Battlefield dominance

Despite the weaknesses of military slavery, the institution produced some of the best soldiers of the medieval and early modern periods. The performance of the ghulams, mamluks and janissaries on the battlefield is a testament to their superiority over most of their counterparts. Mahmud of Ghazna, the greatest of the Ghaznavid sultans, launched several campaigns into what is now Pakistan and northern India between 1001 and 1024. His forces were almost always heavily outnumbered, but they were superior in training and equipment. At the Battle of Manzikert in 1071, it was the Seljuk sultan Alp Arslan’s heavy ghulam cavalry that dealt the death blow to the Byzantine army after it had been weakened by skirmishing light cavalry. The Mamluk sultanate’s army, composed predominantly of mamluk soldiers, defeated the hitherto undefeated Mongols, halting their westward advance at the Battle of Ayn Jalut, and subsequently defeated four other, much larger Mongol invasions of their territory. The Mamluks also defeated Louis IX’s Seventh Crusade and put an end to the Crusader states in the Levant.

Slaves or not?

Were these soldiers slaves as we understand the term? It is true that many practices in the Muslim world fit our understanding of slavery; military slavery is not one of them. Until the proliferation of effective gunpowder weapons in the late 16th and early 17th centuries, military slaves dominated battlefields and rose to dominate the armies, politics and societies of the regions where they were employed. Some of the wealthiest and most powerful individuals in Muslim societies were military slaves who had risen to become generals, governors and ministers. In some cases, they even rose to be princes and sultans and ruled in their own right. Slaves recruited through a military institution became a political and social elite, which dominated and ruled the Muslim world for much of its history. Source of the article