
GOATReads: Psychology

When Harmony Hides Loneliness

What China reveals about belonging and connection.

"Good relationships are those where people remember your needs without you asking." This definition of connection from a Chinese research participant captures something profound about how relationships work in China. Connection isn't performed through declarations of affection or scheduled check-ins—it's demonstrated through attentiveness to unspoken needs, through the quiet knowing that comes from deep familiarity. But what happens when the conditions that make such knowing possible—shared history, geographic proximity, generational continuity—are disrupted?

Our eight-country investigation of social connection (N=354 across Brazil, China, India, Morocco, the Philippines, Turkey, the United States, and Zimbabwe) included interviews with 20 Chinese participants across different ages, regions, and loneliness levels. What emerged challenges Western assumptions about loneliness as primarily a relationship problem. In the China interviews, place-based belonging and continuity emerged as a major dimension of how connection and loneliness were experienced.

The Geography of Belonging

When Chinese participants described feeling disconnected, they didn't just mention missing people. They talked about missing where they're from. "First of all, it is my hometown," one participant explained. "I have been in my hometown for 19 years, so I have a very deep sense of connection to my hometown, and I usually pay attention to news about my hometown when I read it." Another described returning to their university campus years after graduating: "When I saw the original teaching building and dormitory building, I felt extremely affectionate. I think it might be because seeing these objects reminded me of people and moments from the past—I was nostalgic for those earlier times."

This isn't mere nostalgia. One participant offered a useful distinction: loneliness involves both gu (isolation from one's roots) and du (being alone). You can be surrounded by people yet profoundly lonely if severed from your geographic and ancestral anchors. In southeastern coastal regions, entire villages constitute interconnected clans sharing surnames and ancestry. These bonds are continuously reinforced through ancestral worship rituals, Spring Festival reunions, and communal practices at ancestral halls. When migration for work requires leaving these contexts, something more than "social ties" is lost—an entire framework of belonging disappears.

The Festival Test

The depth of place-based belonging becomes most visible during festivals. Multiple participants mentioned feeling acute loneliness "as the Spring Festival approaches and I haven't been home for a long time." One participant emphasized: "In our country, even if the atmosphere in the original family is not good, people still need to return to their hometowns during specific holidays like the Spring Festival." These aren't optional social gatherings. They are moments when ancestral continuity, family obligation, and geographic rootedness converge. Missing them doesn't just mean missing a celebration—it means being cut off from the very mechanisms that constitute belonging in Chinese culture. Another participant explained it this way: "In China, when there are elderly family members, children will sit down for a meal together during the New Year—no matter how infrequently they've contacted each other." The obligation transcends current relationship quality. It's about maintaining the unbroken chain.
Silent Suffering

Perhaps most striking: Chinese participants rarely discussed loneliness openly, even when experiencing it intensely. "I think loneliness is quite abstract, and people often don't know how to describe it in words," one explained. Another pointed to deeper cultural barriers: "As I said, many people instinctively believe that loneliness is something negative. They associate it with a lack of social skills or a flaw in one's personality." In a collectivist society that values group harmony and social connection, admitting loneliness suggests personal failure. One participant noted: "I feel like in China, people often label others as 'lonely,' which is kind of disrespectful."

The stigma creates a vicious cycle. Loneliness stems partly from the inability to communicate effectively with others. As one participant observed: "Loneliness itself stems from a state of being unable to effectively communicate with those around you." And if loneliness itself prevents disclosure, help-seeking becomes impossible.

This silence has consequences. Most participants described loneliness psychologically—as emotional absence, helplessness, emptiness. But a subset also reported physical spillovers: constant illness, exhaustion described as "like my batteries are permanently dead," disrupted sleep and eating patterns. Yet formal help-seeking remains rare. Only one participant mentioned wanting to see a psychologist—but found it "too expensive."

When Relationships Remain Superficial

Paradoxically, China's emphasis on social connection can itself produce loneliness. Multiple participants described a contradiction: lots of social activity, little genuine connection. "In China, social connections are often superficial, with everyone wasting large amounts of time and energy on social connection performance, which in turn deepens people's sense of loneliness," one explained. Another described China's elaborate drinking culture at banquets: "I think many people actually feel lonely at such banquets, as if forced to learn a lot of drinking table rituals." A third noted: "In fact, in our country, there's often a superficial hustle and bustle—like during the Lunar New Year, when it looks like everyone is reuniting, but in reality, there isn't much meaningful communication." The performance of connection—showing up, following rituals, maintaining appearances—doesn't guarantee its reality. This matters because interventions focused solely on increasing social contact may miss the deeper issue.

Beyond Human Connections

Like our Indian participants, Chinese participants also revealed something Western loneliness research often overlooks: people feel connected to more than just other humans. When asked what else makes them feel connected, pets emerged most frequently—often described as providing stronger bonds than family members. One participant shared: "I have a lot of examples around me, like one of my best friends, she got a puppy about a year ago, and she has a very strong connection with her puppy... when she goes to study abroad now, she makes special trips home because of her puppy, not her family." Nature also figured prominently. And places—especially hometowns and cities visited during travel—evoke powerful feelings of connection tied to memories and experiences.

What This Means for Addressing Loneliness

China's experience reveals that effective loneliness interventions must account for cultural frameworks of belonging.
In contexts where connection is rooted in place, ancestry, and ritual participation:

Measurement tools should incorporate place-based belonging. Standard loneliness scales asking about relationship quality miss the geographic dimension central to the Chinese experience.

Interventions should recognize migration's unique impact. Moving for opportunity doesn't just mean missing people—displacement and inability to participate in return rituals trigger loneliness even when people remain socially connected in other ways.

Creating acknowledgment spaces matters. In cultures where loneliness carries shame, safe forums for honest discussion become crucial—but these must be culturally adapted, not imported wholesale from individualist contexts.

Festivals and rituals aren't optional. Policies affecting when people can return home (work schedules, travel costs, migration restrictions) directly shape loneliness by determining access to the very occasions that reinforce belonging.

The solution isn't more socialization. If connection is already performative and superficial, adding more social events intensifies the problem. The question is how to create conditions for genuine rather than ritualized connection.

The Bottom Line

Behind China's cultural emphasis on harmony and social connection lies a more complex reality: rapid modernization and migration have disrupted the geographic and ancestral roots that traditionally anchored belonging. The result is a distinctly Chinese pattern of loneliness—often silent, deeply tied to place and ritual participation, and sometimes masked by outward social performance.

Every country has its own architecture of belonging. Addressing loneliness effectively means understanding and respecting these structures rather than imposing universal solutions. For China, that means recognizing that the migrant worker missing Spring Festival at home isn't just lonely for family—they're experiencing a rupture in the place-based continuity and ritual participation that makes belonging possible.

Source of the article

GOATReads: History

What Can We Learn From Apocalyptic Times of the Past?

More than a millennium ago, a Maya community collapsed in the face of a devastating drought. One writer joined an intrepid archaeologist to upend what they thought they understood about why it all happened.

Ancient glyphs, forgotten temples, abandoned cities decaying in the jungle: The collapse of the Maya Empire is one of archaeology’s most notorious apocalypses. It’s also been presented to us as one of the most mysterious, and the most frightening. Millions of people seem to simply disappear from the archaeological record, in one city after another, during a time of devastating drought. It sends shivers down our spines—and tourists flocking to the ruins. But as I researched the Classic Maya (250-900 C.E.) collapse for my book Apocalypse, I came to think this was the past apocalypse that most resembles the intertwined crises we’re facing today. The way it turned out shouldn’t scare us. It should inspire us to take the future into our own hands. Whether it’s driven by human or natural forces, or a combination of both, apocalypse is a rapid, collective loss that fundamentally changes a society’s way of life and sense of identity. Archaeologists use objects, buildings and bones to reconstruct these cataclysmic events and the cultures and people they affected. They can see what happened before a world-shattering event, and they see what happened after. They can see the trends that made a society vulnerable, and they can see how survivors regrouped and transformed. For archaeologists, the most important piece of the story is not the apocalypse itself but how people reacted to it—where they moved their settlements to escape, how they changed their rituals to cope, what connections they made with other communities to survive. That’s where they find countless stories of resilience, creativity and even hope. And that’s what I saw in the Maya world, especially in a mysterious ancient city called Aké. Aké is less than an hour’s drive from Mérida, the capital of Mexico’s Yucatán State, but it feels like a world apart. During a recent visit, even the archaeologist Roberto Rosado-Ramirez, who has been working in Aké for 20 years, worries that he missed the turnoff. But soon we are on the correct one-lane road, driving through a tunnel of green formed by tree branches arching overhead. We’re visiting during Yucatán’s rainy season, when the region’s scrub forest bursts to life. Pendulous, teardrop-shaped birds’ nests dangle from the branches, and kaleidoscopes of marigold butterflies flutter ahead of our car. An iguana lounges in the middle of the road, unperturbed by passing bike and motorcycle traffic. After about 15 minutes, the forest gives way to the town of Aké. Signs of its past are everywhere. The ruins of a Maya pyramid sit near first base of a community baseball field in the center of town. Nearby, in the core of the restored archaeological site, another pyramid is topped with an unusual array of ancient columns. The enormous stones used to build that pyramid indicate it may have been constructed sometime between 100 B.C.E. and 300 C.E., Rosado-Ramirez tells me. A full-blown city grew up around the sacred monument between the years 300 and 600 C.E. Aké reached its height sometime between 600 and 1100, with a population of 19,000. It was economically prosperous, connected to extensive trade and diplomatic networks, and the religious heart of its community.
During his first years in Aké, working on excavations led by Mexico’s National Institute of Anthropology and History (known by the Spanish acronym INAH), Rosado-Ramirez studied the city during this time, known as the Classic period. Events ranging from market days to religious festivals happened in the city’s plaza, which was kept meticulously clean. The spirits of gods and ancestors were believed to inhabit the temples surrounding it, and the succession of divine monarchs who lived alongside them was responsible for keeping Aké—and the rest of the Maya world—in working order through rituals (including sacrifice and bloodletting) that placated the deities and diplomacy that managed Aké’s relationship with other city-states, near and far. Merchants vied with the nobility for wealth and power, established trade routes, and facilitated the transport of goods over long distances; people moved from place to place with relative ease. That would all change—at first slowly, and then quickly—as an apocalypse rippled up from the south. At first, the people of Aké would have heard rumors of faraway warfare, in what is now the Petexbatun region of Guatemala, 300 miles south of Aké. In the mid- to late 700s, when Aké was at its apogee, cities and towns there suddenly turned themselves into walled fortresses, and years of excavations led by archaeologists from Vanderbilt University found evidence that some of the region’s most impressive buildings, even inside the fortifications, were attacked and destroyed. By whom, and why, remain mysterious. The region had plenty of water and doesn’t appear to have faced any environmental challenge at the time. Until the war began, the region’s population was stable, and there’s no evidence of foreign invasion or attempted conquest. Perhaps the low-level tension that typically simmered between Classic Maya city-states boiled over into outright and widespread warfare for reasons we might never know. But by 830, all the cities in the region had collapsed, and their leaders were killed or had fled. Over the next century or so, the apocalypse slowly spread and consumed once powerful Maya cities such as Tikal, in northern Guatemala, and Calakmul, in the southern part of the Yucatán Peninsula. In the 1990s, a sediment core extracted from Lake Chichancanab provided the first paleoclimatic evidence of a severe drought at the end of the Classic period. In the years since, researchers have gathered many more paleoclimate records from lakes and caves, which show that drought likely hit these southern Maya cities during the ninth century, perhaps amplifying the political instability in a classic apocalypse pattern. Perhaps, with all the conquests, alliances and intermarriages, the elite class had grown too large and unwieldy for common farmers to support in a time when food supplies were dwindling, Rosado-Ramirez suggested to me. And maybe those intertwined political and environmental pressures led to a popular revolution in philosophy and religion that undermined the justifications for monarchy, evidence of which wasn’t preserved as well as portraits of god-kings carved in stone. As the divine kings of the south fell, their cities did, too. Without the need to glorify all-powerful rulers, the construction of new buildings and monuments stopped. Because Maya artists often engraved their work with precise dates, archaeologists can see when a city’s last monument was erected, a final gasp of proven occupation before a site’s abandonment.
In a handful of Classic Maya cities, most famously Cancuén in Guatemala, archaeologists have found mass graves that some think hold the remains of the former nobility who were massacred in the political transition. But in most places, it appears that the elite class simply left once their hold on power slipped. A city without a divine king didn’t have much work for priests or artists, and so they followed. Merchants, especially those who traded in elite and exotic goods, would have made their way to better markets or returned to their homelands as they waited for the next opportunity. Commoners, mostly farmers with household fields, probably would have been able to hold out the longest, if they wanted to. But eventually most people left the old cities, and they fell into ruin. Whatever news of these crises arrived in Aké—and it must have, especially once refugees started pouring out of the south—its residents may very well have felt protected by their geographic and cultural distance from the chaos. The northern part of the Yucatán Peninsula had always been the driest part of the Maya world, and so its residents would have been used to living with a certain amount of water stress. They would have known how to collect rainfall and conserve the water levels of their artificial reservoirs and natural cenotes, skills that perhaps served them well at the beginning of the dry period. Gradually, however, the strife crept closer and closer. Sometime between 900 and 1000, Rosado-Ramirez found, Aké built a wall that enclosed the temples and palaces of the city center and ran right across the 20-mile-long limestone road that had long connected it to the city of Izamál, practically and symbolically cutting off Aké from the wider Maya world. Perhaps the wall was meant to protect against attacks from the ambitious Itzá people, who were consolidating power in their capital, Chichén Itzá, or maybe Aké’s leadership wanted to exert more control over immigration as more and more refugees wandered a land struggling to produce enough food to feed them. Still, Aké remained a flourishing city through the tenth century. Even as political power shifted, the climate fluctuated and the wall went up, the apocalypse still hadn’t quite arrived. In a haunting echo of how many of us experience climate change today, it was always happening somewhere else, to someone else—until suddenly, it happened to them, too. An even more severe and prolonged drought hit between 1000 and 1100, overwhelming even the northern cities’ water infrastructure and management. By the end of that century, Aké and the rest of the cities in the northern Yucatán Peninsula had collapsed and were largely abandoned. Just as had happened in the southern cities a few centuries earlier, the elites left first, and most of their subjects dispersed in their wake. As cities collapsed, old identities and territories dissolved. Architecture and art would never reach the same monumental heights, nor would they ever require the same amount of labor. Ancient trade routes disintegrated and re-formed as shadows of themselves. Scribes and sculptors, for the most part, stopped carving significant dates into stone, giving archaeologists the eerie impression that Maya history had ended. It hadn’t, of course. Most Maya people moved from cities to villages, many along the peninsula’s coast, after there were no more inland cities to try to make a life in.
But thatched houses are far more ephemeral than stone pyramids, Rosado-Ramirez points out, and material evidence of many of these post-apocalyptic communities decayed long ago. Information about how and where most common Maya people remade their lives in a post-apocalyptic world has remained so hard to find that past generations of archaeologists didn’t bother looking for it—or perhaps they couldn’t even imagine there was anything there to find. In Aké today, modern houses abut the remains of ancient ones, which now take the form of small mounds on the otherwise flat landscape. Many of the town’s men work in a crumbling factory building where for over a century they—and their fathers and grandfathers before them—have processed henequen, a local succulent, into some of the world’s strongest rope. Part of the factory’s roof caved in during a hurricane in 1988, and the damage was too extensive to repair, but about half of the building is still accessible. The equipment inside is so old that replacement parts are occasionally purchased from owners of historic henequen machines otherwise on display as museum pieces. When I arrive with Rosado-Ramirez, his closest friend and collaborator in Aké, Vicente Cocon López, is there to meet us at the baseball field with his son Gerardo Cocom Mukul. Rosado-Ramirez has known Cocom Mukul, now in his 20s, since he was in elementary school; he has been working as the project’s unofficial photographer for many years. The four of us climb the enormous stairs to the top of the ancient pyramid arrayed with columns, now cleared of vegetation and restored to something like its former glory thanks to INAH’s restoration efforts. The ascent requires taking huge steps that leave me short of breath. The pyramid was clearly designed to be both accessible and imposing. Cocon López, who has an impressive eye for construction techniques both ancient and modern, makes sure I notice how all the enormous stones in the staircase are roughly the same size—an indication of the incredible amount of labor and skill its ancient builders brought to it. For many years, archaeologists thought Aké had been completely abandoned after the Classic period collapse. But after years of getting to know the current inhabitants of Aké, that interpretation no longer sat right with Rosado-Ramirez. All of his training as an archaeologist had taught him to focus on monumental architecture like pyramids. But if he applied the same rule to present-day Aké, he realized he’d be forced to conclude that the town had been abandoned decades earlier, when the henequen factory stopped being maintained. And, of course, that wasn’t at all what had happened. So Rosado-Ramirez started trying to see ancient Aké through the lens of modern Aké. Had everybody left after the apocalypse, or did it remain the home of a different kind of community? If anyone could help him find more evidence of post-apocalyptic construction, it was Cocon López, who knew the site better than anyone and was skilled at spotting subtle variations in building patterns and materials. Without the backing of a grant or a government-funded excavation, Rosado-Ramirez started trekking out to Aké on Saturdays to walk the site with Cocon López and see what they could find. “We didn’t even have money for water,” Rosado-Ramirez remembers.
Cocon López scouted the site during the week, often following old metal tracks laid down during the boom days to move henequen in horse-drawn carts, and drew maps to guide their joint exploration on the weekends. In jumbles of old stones that, to me, are barely legible as the remains of buildings, Cocon López could see the entire timeline of old Aké and how later people interacted with and repurposed what came before. He and Rosado-Ramirez found small structures built with a mix of the huge stones of Aké’s earliest urban phase and smaller stones that came later. With his local team’s help, Rosado-Ramirez identified the remains of 96 small houses within the monumental core of old Aké and excavated 18 of them. These buildings, most often found clustered together in groups of six or so around a shared patio, were filled with ceramic styles popular during the Postclassic period of the 10th to 15th century, and they occupied locations—including the city’s once pristinely empty central plaza—where commoners would never have been allowed to live before the collapse. Perhaps these residents had moved inside Aké’s wall for protection during an unstable time, or maybe they simply found the old border convenient for defining and unifying their now much-smaller community. Rosado-Ramirez estimates that between 170 and 380 people continued living among the ruins of Aké during the Postclassic period. The upper end of that estimate is almost exactly how many people live in Aké today. For over 400 years, this smaller, more egalitarian, more flexible and more resilient Postclassic lifestyle worked for the Maya people of the Yucatán Peninsula. Even after the drought ended and the environment stabilized, they never again agreed to the rule of a divine king. They had tried living in that kind of ultra-stratified complex society, and it had failed them, catastrophically. It made their cities vulnerable, their politics fragile and their religion powerless when apocalypse struck. Why would they take that kind of risk again? Instead, the Maya of the northern Yucatán Peninsula built a different kind of capital, starting around 1100. Mayapán wasn’t the seat of a god-king but rather the meeting place for a confederacy composed of representatives from powerful families and polities across the peninsula. Isotopes in the bones of people buried there show that the city attracted people from far and wide, both commoners and elites. Perhaps that’s where the rulers of Aké ended up after they fled: as junior members of Mayapán’s council. Like Aké at the end of its first phase of life, Mayapán built a city wall. Unlike Aké, its enclosed center was densely occupied with both monumental buildings and commoner neighborhoods. Mayapán was home to several imposing pyramids, but no single central plaza like the one in front of Aké’s column-topped pyramid. No one person or group controlled Mayapán, and so they didn’t need a massive public space where their followers could gather to hear their proclamations. Politics happened in the compounds of the council’s most powerful constituencies, and religious practices were more personal, with altars and incense burners inside people’s homes. Without the need to glorify individual rulers, scribes wrote books instead of carving history into stone monuments, and architects designed pyramids that were much easier to build.
Unlike in ancient Aké, people in Postclassic Mayapán didn’t need to spend their time shaping and precisely arranging huge stone blocks into elegant, perfect staircases and palace walls to comply with royal tastes. Instead, they built pyramids from jumbles of smaller stones, which were then covered in stucco and painted with colorful murals. Most of those murals have long since faded away, exposing the stone foundations beneath. When Rosado-Ramirez took me to visit these ruins centuries later, Mayapán’s monumental architecture looked messier and less refined than its Classic period counterparts. But at the time, it would have been just as beautiful and imposing, albeit in a different way. Then, around 1400, drought struck again, and Mayapán, too, collapsed. According to surviving Maya texts, it was largely abandoned by around 1441. Its confederacy and council government had likely been a reaction to the inequality and failure of divine kingship, but the Maya of the Yucatán Peninsula were now learning that any kind of centralized power, even when it was shared, could succumb in the face of an environmental challenge that required the kind of flexibility and adaptation that only smaller communities were capable of. The people of post-apocalyptic Aké likely felt the stress of this new drought, but they didn’t have to leave their homes or remake their lives. They were already living in the safest, most adaptable way they knew. But their past was anything but lost or forgotten. People rebuilt and reimagined their communities with the help of everything their ancestors left behind, while also, perhaps, vowing not to repeat their mistakes. We don’t yet have the kind of perspective on our own time that archaeologists have on the Classic Maya collapse. They can see the whole story, while we are still in the midst of ours. But the way Maya people reinvented themselves and their societies gives me hope. They carried forward what served them and left behind what didn’t, especially the concept of divine kingship and its resulting inequality. What if we learned to see the Classic Maya collapse not as a terrifying story of a mysterious people who vanished in the face of environmental catastrophe, but as fertile ground for an exciting and necessary transformation that changed how its people saw themselves forever? Aké, Mayapán and hundreds of other Postclassic Maya communities are not cautionary tales. They are much-needed examples of adaptation and reinvention—not despite the apocalypse, but because of it. Source of the article

Phantasia

Imagination is a powerful tool, a sixth sense, a weapon. We must be careful how we use it, in life as on stage or screen.

When I performed in the play The Iceman Cometh (1946) by Eugene O’Neill, I played a character who stands up near the end and pours his heart out on stage. My character was almost like a messenger in a Greek tragedy but, instead of describing some nightmarish battle, he had to recount the horror of his own failures and the regrets of his life. It was an intense, emotionally draining experience, and I had to do it night after night. Each night I wondered if I could do it again, but somehow the energy of the room, the other actors and the story itself helped me to dial in some deep emotional frequency from my own history. It feels like you’re a shaman because you kind of lose yourself and channel something. And that activates deep emotions in the audience, too. So there’s a weird connection – I’m losing myself, and the audience is losing themselves. Then we come down together, having shared something powerful.
– Paul Giamatti

Like other artists, the actor is a kind of shaman. If the audience is lucky, we go with this emotional magician to other worlds and other versions of ourselves. Our enchantment or immersion into another world is not just theoretical, but sensory and emotional. How do actor and audience achieve this shared mysterious transportation? This shared ritual draws upon a kind of sixth sense, the imagination. The actor’s imagination has gone into emotional territories of intense feeling before us. Now they guide us like a psychopomp into those emotional territories by recreating them in front of us. Aristotle called this imaginative power phantasia. We might mistakenly think that phantasia is just for artists and entertainers, a rare and special talent, but it’s actually a cognitive faculty that functions in all human beings. The actor might guide us, but it’s our own imagination that enables us to immerse fully into the story. If we activate our power of phantasia, we voluntarily summon up the real emotions we see on stage: fear, anxiety, rage, love and more. In waking life, we see this voluntary phantasia at work but, for many of us, the richest experience of phantasia comes in sleep, when the involuntary imagination awakes in the form of dreams. In The Descent of Man (1871), Charles Darwin writes:

The Imagination is one of the highest prerogatives of man. By this faculty he unites, independently of the will, former images and ideas, and thus creates brilliant and novel results … Dreaming gives us the best notion of this power; as Jean Paul [Richter] … says, ‘The dream is an involuntary art of poetry.’

The dreaming brain isn’t aware that the monster chasing us is unreal. During REM sleep, your body is turned off by the temporary paralysis of sleep atonia, but your limbic brain is running hot. In waking life, the limbic system is responsible for many of the basic mammalian survival aspects of our existence, including emotions, attention and focus, and is deeply involved in the fight-or-flight response to danger. The dreaming brain isn’t just faking a battle but actually fighting one in our neuroendocrine axis. That’s why we sometimes wake up sweating with our heart racing. The involuntary imagination of dreaming creates an episode of emotional reality – not sham emotions. The same is true in the theatre, the movie, the novel.
We’re really stirred by the St Crispin’s Day speech in William Shakespeare’s Henry V, really terrified by Edgar Allan Poe’s short story The Pit and the Pendulum (1842), and really haunted by Andrei Tarkovsky’s film Stalker (1979). The intensity of these emotions is not just felt by the audience. For an actor, embodying a scene with another actor – who reveals, say, a deep vulnerability from losing a child – can mean that a scripted fiction enacted by two strangers on a stage actually bonds the actors themselves in real intimacy long after the play or film is over. Like in a dream, the limbic mind experiences art as real. An actor or writer embodies the deepest traumas and joys of life so the audience can experience them vicariously. Acting (and other collective artistic work) can be a kind of mainlining of intimacy, and the audience partakes of this intimacy too. There’s a lot of subtle embodied communication going on. There’s an intense awareness between the actors themselves, and between the audience and actors – especially in theatre. The most obvious feedback happens in comedies of course, because you can hear the laughs or the lack of them. But much more subtle stuff is happening too. Once, when I was playing Hamlet, there was an early scene with the Player King. His prop beard was slowly falling off his face – unbeknown to him­ – just as I was saying a line about beards. And there was this amazing energy in the whole place from the collective recognition that we were all playing in a play, but also a play that knows it’s a play. And sometimes when something goes wrong on stage – like a mistake, or a prop thing – it actually brings in a fresh energy by breaking the normal patterns, and everyone becomes more present in the room. At other times, the emotional awareness is more intimate. Once I was playing the husband opposite the actor Kathryn Hahn in a scene where another character is inadvertently saying something insulting to her, and she doesn’t know what to say in response, and I’m trying to sort of cover it over, and then we just share this quiet moment together as we listen to the other character continue talking. They shot the scene many times, but then after one particular take we both looked at each other and said: ‘Wow, I really felt that one.’ And I think the authenticity of these kinds of connections can shine through to observers. For example, I think that was the take the director eventually used as well. To prepare a role, the actor must function as an empathy sponge: they work to ingest and ingurgitate all the social nuances of power, vulnerability, hope and despair. This is a sensory osmosis – the actor must cultivate this like a sixth-sense organ. It happens ‘in the dark’ of the mind so to speak, beneath the radar of conscious thinking. Nor does this rely on direct observations of human behaviour alone. According to the extended mind theory, humans offload much of who they are into the environment. The philosophers David Chalmers and Andy Clark argue that our minds don’t reside exclusively in our brains or bodies, but extend out into the physical environment (in diaries, maps, calculators and now smartphones, etc). Consequently, you can learn a great deal about someone by spending time in their home – not deducing facts like Sherlock Holmes, but absorbing subtle understandings of character, taste, temperament and life history. 
When an actor prepares to play a historical figure, he might find deep insights in the extended mind – the written record, the physical environs, the clothing and so on. A small detail can turn the key and open up a real ‘visitation’ from the past. When I played President John Adams in the 2008 miniseries for HBO, I studied many historical records, but the key that helped me find his character was an amazing compilation of his health complaints. Someone had culled all his letters for any references to his health, and produced this giant record of elaborate and hypochondriacal health complaints. The man was a wreck with digestive problems, toothaches, headaches, bowel troubles and more. After manic periods of high energy, he would ‘take to his bed’ for a couple of weeks. In reading all this, I began to see how to play the everyday John Adams. This capacity to get inside the emotional landscape of another person draws on a deep, evolutionary cognitive ability, a way of absorbing or reading what the psychologist James J Gibson called ‘affordances’. Gibson’s affordances can be understood as all the things that surround an organism in its environment, with potential to be understood, grasped and exploited. An affordance is relational: it depends on the ecological relationship between the animal and its lifeworld, rather than having an objective value. A freshly baked baguette is to a baker a proud symbol of her art; to the hungry child, it’s a meal; to the assistant at the boulangerie, an object to be arranged in the window. An affordance has meaning depending on where you stand, and much of our grasp of affordances runs beneath conscious analysis. For social mammals, including humans, many of the affordances in our environment are social in nature, and thus we spend a huge amount of perceptual energy in processing signals of behaviour, demeanour and emotion from our fellows, much of which never surfaces to our conscious mind. A chimpanzee, for example, sees the posture of the new guy as dominant – the dominance and subordinance exist in the real-time relationship between the two animals’ bodies and behaviours. The chimp doesn’t need to reason about the relationship, because the perception itself contains a great deal of information and prediction about status, disposition, character and possible behaviours. Stage actors ‘read the room’ in a similar way to our primate cousins reading their social world of dominance. A lifetime of subconsciously reading rooms (reading people) gives artists a rich palette of insights, feelings and behaviours. Unlike other animals, humans use phantasia to expand these affordances and create alternative behaviours – alternative realities – in the real-time present, as well as in the future. We take social affordances from our existing lifeworlds and spin new worlds out of them. That is the power of phantasia, but also, as we will see, its danger. Some people think that the imagination is just a frivolous fantasy-making ability. For Plato, the imagination produces only illusion, which distracts from reality, itself apprehended by reason. The artist is concerned with producing images, which are merely shadows, reflecting, like a mirror, the surface of things, while Truth lies beyond the sensory world.
In the Republic, Plato places imagery and art low on the ladder of knowledge and metaphysics, although ironically he tells us this in an imaginary allegory of the cave story. By contrast, Aristotle saw imagination as a necessary ingredient to knowledge. Memory is a repository of images and events, but imagination (phantasia) calls up, unites and combines those memories into tools for judgment and decision-making. Imagination constructs alternative scenarios from the raw data of empirical senses, and then our rational faculties can evaluate them and use them to make moral choices, or predict social behaviours, or even build better scientific theories. For Aristotle, phantasia (which comes from the Greek word for ‘light’), is as important to knowledge as light is to seeing. Although Aristotle was careful to distinguish phantasia from the ordinary five senses, because it can occur without any stimulus from outside, we could understand phantasia as a kind of sixth sense, shared by humans and many animals, a way to know the world, to which humans return in dreams. Here, Aristotle is thinking of imagination as something like the involuntary process; the associational mashups of dreams, the subconscious tracking of affordances, the conditioned memories we use to evaluate and make sense of our experience. When we bring this process under executive control – that is, when we harness it to our waking, speculative and creative mind – we transform the involuntary imagination to voluntary, and this ‘phantasia 2.0’ is unique to humans. Perhaps a chimpanzee might dream of a hippo it once saw, but only a Walt Disney can bring the hippo to mind whenever he wants, dress it in a tutu in his mind’s eye, draw it, animate it dancing, and release it as a film called Fantasia (1940). Contemporary science of the mind sides with Aristotle, not Plato. Phantasia is adaptive and helps us know others and ourselves better. Art is not just great for therapeutic emotional management and catharsis, but also produces knowledge, generating new ways of understanding and manipulating the world. Contemporary neurocognitive theory argues that the mind is a ‘prediction processor’. It builds mental models of the world, and tests predictions, always updating the model to reduce future errors. These cognitive processes are not possible without the imaginative faculty. The imagination helps us create possible futures (new architecture, medical breakthroughs, new political possibilities) but also helps us model other minds. When art is good – when the acting and the script are on point, or a character in a novel is nuanced – the audience actually learns more about human behaviour than real-life observation provides. This is because the interior of the character is articulated in art, whereas it remains submerged in real social interaction. We are, then, running a constant ‘simulator’ in our own minds, whether we’re consciously aware of it or not. Because of this involuntary sixth sense, we seem to know things without having figured them out. The dark processing (reading affordances, absorbing impressions from the extended minds around us, involuntarily combining narratives in headspace, and just simulating things) serves up ‘reality’ to us without revealing its hand in the construction. The mind is always incubating an alternative or supplemental reality. Our experience is always imagination-laden. Yet the vivid, and often unconscious, nature of this cognitive process isn’t always enriching. 
If imagination is an involuntary creative act of cognition before downstream rationality uses it, it can also be dangerous. Without properly understanding imagination’s role in cognition, our views can present themselves to us as straightforward, accurate assessments of the world. People who disagree with us seem just ‘irrational’ (bad at weighing evidence and logic) or crazy. But once we take account of the imaginative layer of mind (the filtering and modelling we do between the raw data and the reasoned conclusions or beliefs), we see that the world itself really is different for the atheist as opposed to the Christian; the Republican as opposed to the Democrat; the rationalist versus the QAnon devotee. The legal scholars Cass R Sunstein and Adrian Vermeule argue that conspiracy theories arise when people suffer from a ‘crippled’ knowledge base because they have ‘limited’ informational sources. If you watch only one news network, or get your ‘facts’ from a crank website or radio show with no peer review, then you’re going to be highly susceptible to conspiracy theories, and this will likely be exacerbated if your formal education included little instruction in logic and critical thinking. Thus, the answer to conspiracy theories is more education and more rational weighing of information sources. Conspiracy theories aren’t, however, just the result of alternative ‘information sources’ or limited information – we’re all awash in information. Rather, a potent conspiracy is a narrative arc in which the believer is a heroic character. Phantasia is a potent ingredient here. The persuasiveness of imagination consists in its embodied quality – the conspiratorial mind feels and sees itself as a protagonist in a drama. A dramatic story such as the QAnon theory is reinforced by a charismatic leader (politician/actor/clergy/celebrity), creating a phantasia layer that feels real, just as the dream feels real to the limbic system and the movie feels real to the audience member. No wonder then that conspiracy theorists like to dress up. The conspiracy-minded Trump supporters who smashed into the Capitol Building in Washington, DC in January 2021 included half-naked ‘Ur-Americans’ with painted faces and buffalo headdresses, carrying signs that said ‘Q Sent Me’. A charismatic leader is like the shaman/actor on stage. They have ‘gone before’ into the embodied belief, they evoke the emotions, they involve the watcher/audience so intensely that everybody gets deeply invested. The insurrectionists in their dress-up costumes at the Capitol are less like actors and more like fully immersed audience members. The insurrection was a kind of malevolent cosplay convention in which superfans who had intensely internalised the narratives themselves took over the stage, only the ‘convention’ in this case was at the Capitol. Obviously, this makes them no less dangerous, because their guns are not props, and mob violence is wildly contagious. Our phantasia is not just ‘in our heads’ but actually extended and distributed into our environment. Just as the actor changes into costume and transforms into a new persona, so too the jingoist drapes himself in flags and paraphernalia, becoming a new persona – one that feels righteous and empowered, in this case, to do violence. There is ‘magic’ in the accoutrement. Anthropologists and social psychologists have long recognised the unique dynamics in ritual adornment and behaviour.
Ritualised collective imaginings help to produce what the French sociologist Émile Durkheim in 1912 called ‘collective effervescence’ – a feeling state or force that excites individuals and unifies them into a group. It’s a similar phenomenon in political crowds, religious ceremonies, music concerts and theatre experiences. In our current climate of partisan paranoia, we’ve all ramped up imaginative demonisation of the other. This leaves us vulnerable to dark imaginings. The Chinese American philosophical geographer Yi-Fu Tuan states it plainly in his book Landscapes of Fear (2013): ‘If we had less imagination, we would feel more secure.’ Yes, there are real threats and enemies out there, but not as many as our active imagination produces. Alas, we can’t stop fantasticating if it’s the root of human cognition, and we wouldn’t want to give it up if we could. But can we turn that awesome power of imagination toward humanising ourselves and others? Imagination recruits our natural empathy system and can amplify it. We see fear or joy in another person’s face, and we catch it like an emotional contagion. The actor has made a career of this natural human ability to recreate another’s feelings and perspectives within one’s self. Properly cultivated, this emotional mimicry can become ethical care, and art and artists play a crucial role in this cultivation. I have played some sinister characters doing some ethically dubious things in dark storylines. I’m not someone who thinks art must be ‘moral’ per se. A lot of art with really overt moral pretensions is usually pretty bad art. Having said that, we could be making better use of the imagination, making genuinely smart and nuanced characters. A lot of contemporary entertainment seems to me to have lazy renderings of characters. There’s a kind of shorthand going on: a character beats up someone in one scene, then kisses his mom in the next to show complexity and ambiguity, but it all feels too simple and easy sometimes. There’s a lot of contempt and cynicism in contemporary entertainment. The characters are contemptuous and cynical, and the impulses creating the characters are too. And there’s contempt for the audience: just give them crud. That’s always been a problem; I sound like an old-man moral scold. I’m all for the occasional mindless, nihilistic narrative, but the imagination is a hugely powerful tool and therefore weapon: if you’re gonna go morally dark or ambiguous, you’re gonna lacerate people, you better know why you are. You better be damn good at what you do, like Herman Melville good. It’s oddly easy to crank out something risky and edgy, we all think we know what that is, but most of it doesn’t really risk anything important, make real critiques of injustice or power. For sure: there’s really good stuff out there. But a lot of it’s weak, masquerading, performing its importance. It’s really difficult to be ‘true’ as in ‘authentic’. Believe me, I know, I’m shooting for it myself and frequently missing the mark. It’s difficult to show how real friendships form or end, how real grief is processed, real horror and pain are inflicted and borne, so on. You gotta be careful with the imagination. It matters how it’s wielded. There’s a lot of opportunity for critique, but hope too. Acting is like a ‘laboratory of identity’ because the actor gets to try on many different selves. Some of them are sinister and some saintly, with all points in between. 
The movie industry and the arts generally are also large-scale laboratories of identity for audiences. Such power carries some responsibility. But all of us have this power of phantasia – in fact we can’t escape it – so it’s on all of us to be better actors and even directors of our stories, individual and shared. Source of the article

Can Venice’s Iconic Crab Dish Survive Climate Change?

For more than 300 years, Italians have fried soft-shell green crabs, called moeche. But the culinary tradition is under threat.

Domenico Rossi, a fisherman from Torcello, an island near Venice, was 6 years old when he first went fishing with his dad. “I loved everything about it,” he says. “The long days out on the water, the variety of fish, even the rough winds that would sometimes capsize our boat.” Rossi vividly remembers picking up nets full of eels, cuttlefish, prawns, crabs, gobies and soles. But that rich biodiversity is now a distant memory. In the past 30 years, the population of many species native to Venice’s lagoon, a fragile ecosystem of brackish waters and sandy inlets, has shrunk. “At least 80 percent of species have gone,” Rossi says. The 55-year-old fisherman is one of the last trained to catch local soft-shell crabs. Scientifically named Carcinus aestuarii, the green crab is the key ingredient of a beloved local dish called moeche (pronounced “moh-eh-keh”), a word that means “soft” in Venetian dialect. Dipped in eggs, dredged with flour and fried, these crabs are usually served with a splash of lemon and paired with a glass of local white wine. The origin of this dish goes back to at least the 18th century—it was mentioned in the 1792 volume on Adriatic fauna by Italian abbot and naturalist Giuseppe Olivi. As Olivi described, moeche are only found twice per year, during spring and fall, when changes in water temperatures trigger crabs to molt. Until ten years ago, it was common to find fried moeche in osterias and bacari, or informal wine bars, across Venice’s lagoon, from Chioggia in the south to Burano in the north. Recently, though, it has been increasingly hard to find them. Fishermen report a 50 percent decline in catch just in the past three years. As climate change, pollution and invasive species put pressure on local species, fishermen, chefs and locals may need to rethink their centuries-old food traditions.

A fragile ecosystem

Spanning 212 square miles, from the River Sile in the north to the River Brenta in the south, Venice’s lagoon is the largest wetland in the Mediterranean. Only 8 percent of the lagoon is made up of islands, including Venice, while the remaining surface is a mosaic of salt marshes, seagrass wetlands, mudflats and eutrophic lakes. These diverse habitats, characterized by various degrees of salinity and acidity, have historically been home to a rich variety of species. But in the past three decades, the impact of pollution from nearby industries, erosion due to motorboat traffic and warming waters have put pressure on the lagoon’s fragile ecosystem. This period coincided with the installation of MOSE, a system of movable floodgates designed to temporarily seal the lagoon from the Adriatic Sea to protect inhabited areas from sea-level rise. While essential to Venice’s survival, MOSE now prevents high-tide waters from reaching the innermost parts of the lagoon, preventing the influx of oxygen and nutrients that come with seawater and halting the formation of sandbars and salt marshes. As a result of these changes, many habitats have degraded and some native species have been hard hit. The green crab is found in many parts of the Mediterranean, including Italy, France, Spain and Tunisia. But it is only in Venice’s lagoon, in places like Chioggia, Burano or Torcello, that fishermen have developed a special technique to capture this crustacean during its molting phase. Like all crustaceans, green crabs molt while growing.
During molting, they shed their outer shell, leaving behind an edible, soft inner shell. Fishermen in Venice’s lagoon have learned how to identify and catch molting crabs. “You need to learn to spot the signs on crabs’ shells to know if they are about to molt,” Rossi explains. “It takes years of just watching how your elders do it, and eventually you learn.” Crabs are typically caught 20 days before the start of the molting process. Once caught, crabs are placed in cube-shaped nets along the shores of canals. Fishermen, or moecanti as they are called locally, check them up to twice a day to spot signs of impending molting. About two days before their shell-shedding process, they are placed in another container. “Once there, you have to check them more frequently to pick them up right when they shed their shell and they are soft,” Rossi says. As crabs get closer to molting, they become weaker, and they can fall prey to younger, stronger crabs. A key part of a moecanti’s job is to constantly check the catch to prevent this sort of cannibalism, Rossi explains. “You have to pick out the weak ones and separate them from the rest,” he says. “It takes decades just to be able to tell where crabs are in their maturation process.” After molting, soft-shell crabs are usually sold and cooked within two days. When Rossi was a child, soft-shell crabs were abundant and considered part of Venice’s affordable rural foods known as cucina povera. But today’s scarcity has turned what was once an inexpensive fishermen’s food into a highly sought-after delicacy. Just six years ago, moeche sold for €60 per kilogram. The price of one kilogram of moeche can now reach €150, Rossi explains.

Green crab goes out, blue crab comes in

It’s hard to find accurate data on the green crab population of Venice’s lagoon. Scientists mostly rely on data from fishermen. “Based on fishermen’s catch, we can say that there has been an overall decrease of green crab in the past 50 years,” says Alberto Barausse, an ecologist at the University of Padua who has studied the impact of heatwaves on green crabs in Venice’s lagoon using fishermen’s catch data going back to 1945. Reasons for the decrease of green crabs are complex, Barausse explains. As detailed in his 2013 study, heatwaves can stress green crabs during their early embryo stage, making them less resilient to future threats. Changing rain patterns, with less constant rain but more frequent extreme precipitation, are changing the lagoon’s salinity levels, with a cascade of effects on its ecosystem. For example, higher salinity and warmer temperatures have favored the arrival of Mnemiopsis leidyi, a gelatinous marine invertebrate that eats mostly zooplankton, including the larvae of the green crab. Warmer waters have also contributed to the arrival of another highly invasive species, the blue crab. Native to the Atlantic Ocean, the blue crab was first spotted in Venice’s lagoon around 1950. It is only in recent years that it found conditions suitable to fully expand its presence there. “Up until a few years back, water temperatures during winter were too cold for blue crabs,” says Fabio Pranovi, an ecologist at Ca’ Foscari University in Venice. “But thanks to warming waters, blue crabs now live and reproduce in the lagoon throughout the winter.” Since 2023, the blue crab population in Venice’s lagoon has exploded.
From an ecological standpoint, blue crabs are considered an invasive species, Pranovi explains, because they compete with native species like the green crab for shelter and food. They don’t yet have a significant predator, so they are growing at a much faster rate than native species. As explained by Filippo Piccardi, a postdoctoral researcher in marine biology at the University of Padua who wrote a thesis on the impact of the species in Venice’s lagoon, blue crabs are omnivorous predators that have found their ideal prey among many of the lagoon’s keystone species, such as clams and mussels. In 2024, the impact of blue crabs on local clams was so acute that local authorities declared a state of emergency. For fishermen, these blue invaders are an enemy to battle with daily. “I can’t count the times I had to replace my nets in the past two years,” Rossi says. Traditional moeche fishermen like Rossi still make their fishing nets by hand. Each family has its own way of doing it, almost like a secret recipe, he explains. Because these handmade nets are used to catch green crabs, which measure around 4 inches across, they are tightly woven, with small holes. Blue crabs, which measure up to 9 inches, have much larger claws than green crabs, so they easily break net threads. “They are wickedly smart,” says Eros Grego, a moeche fisherman from Chioggia. “They come, break our nets and just wait there to feast on whatever was in the net.” Damage from blue crabs has been so significant that Rossi is considering replacing his nylon nets with iron cages. “It costs me about €20 to make a kilo of net,” he says. “If I have to replace them every season, it’s going to cost me a fortune.” Blue crabs also eat green crabs, Pranovi says, and, according to Rossi, they have been feasting on their smaller local cousins with gusto thanks to their size and speed. “When you see them underwater, it’s just striking,” Rossi says. “Local crabs are so much smaller and can only move on the seabed, while these crabs are twice their size and can swim really fast across the water.” In 2025, Rossi has not caught any green crabs that would be suitable for moeche. “It’s the first year that I find zero moeche,” he says. “All I find in my nets is blue crabs and some date mussels.” Grego, who works in the deeper southern lagoon, is having a similar experience. “We were already dealing with shrinking catch due to heatwaves and extreme rainfall,” he says, adding that changes in climate patterns had made the traditional molting season less predictable. “The blue crab is the straw that broke the camel’s back.” Changing traditions? The arrival of blue crabs in Venice’s lagoon and the simultaneous decrease of the native green crabs are pushing some chefs to rethink traditional cuisine. Venissa, a one-Michelin-starred and green-Michelin-starred restaurant on the island of Mazzorbo, in the north of the lagoon near Torcello, has decided to no longer serve green crab. “Our philosophy is to cook dishes that don’t undermine the lagoon’s ecosystem,” says chef Francesco Brutto, who has been running Venissa with his partner, Chiara Pavan, since 2015. The couple embraced this style of low-impact cooking after noticing how Venice’s lagoon changed during the Covid-19 pandemic, when pressure from human activities like tourism was eased. “We spotted species we had not seen in years, like turtles and dolphins,” Brutto says. “So we decided to have as little impact as possible.” For that reason, Venissa mostly serves plant protein, Brutto explains. 
Animal protein is used only from species that are not threatened. That means invasive species like veined rapa whelk and blue crab are now fixtures of Venissa’s menu. “Right now, eating green crab is the equivalent of eating an endangered dolphin,” Brutto explains. Venissa still offers moeche, the chef clarifies, but they make it with blue crab. “Moeche of blue crab taste better in my opinion. There is more pulp compared with green crab,” he says. But not everyone is ready to give up traditional moeche. Ristorante Garibaldi, a traditional fish restaurant in Chioggia, has been serving moeche since it opened in the 1980s. “Our clients come here specifically to eat moeche,” says chef Nelson Nemedello. This year, Nemedello could only find about 800 grams of moeche from a local fisherman. “Prices are becoming insane. I paid them €170 per kilo,” he says. But demand is there, despite the price, so Nemedello and his wife keep serving green crabs. “It’s considered a food unique to this place, so people are willing to pay more for it.” According to Fabio Parasecoli, author of Gastronativism: Food, Identity, Politics, sticking with traditional foods can be a way to cling to local identity during times of rapid political and economic change. Traditional foods have always been intertwined with people’s sense of identity, he says, but in the past 20 years there has been a stronger identification with food in many parts of Italy, partly as a backlash against globalization. “It’s a little bit like saying this food is who we are,” he says. “If you take this away from us, then who are we?” In the case of a place like Venice, tourists’ expectations of a specific type of local gastronomic identity also play a role. “If tourists come to Venice expecting to eat traditional food like moeche, then restaurants may feel like they have to offer that,” Parasecoli explains. Plus, as Pranovi notes, it takes time for people to adjust to new flavors. “Some people find moeche made of blue crabs too big while others say the taste is not as subtle,” he says. “It is going to take time for people to change their expectations around how moeche should taste.” Changes in species distribution have always shaped food traditions. Parasecoli cites the example of potatoes, a species native to the Americas that became a widespread ingredient in European cuisine after its arrival from the New World in the 16th century. But in Venice, the pace of change feels fast to many locals. “I grew up in the lagoon, and it’s always been slightly changing. But in the past seven to eight years, I hardly can recognize it,” Rossi says. “It feels like being on the moon.” This pace of change is leaving fishermen and local authorities to play catch-up. Since the blue crab invasion started in 2023, authorities have ordered the capturing and killing of blue crabs. But Piccardi, who studied the impact of the blue crab for his thesis, says trying to erase a fast-growing population that has found optimal environmental conditions is unrealistic. “Our advice is to focus on catching female crabs specifically in order to slow down reproduction,” he says. “And, ultimately, to learn to coexist with this new species.” Fishermen like Rossi and Grego are adapting. “In the past three years, I have mostly caught blue crab,” Rossi explains. “I might as well shift the focus of my fishing.” While open to the idea of catching blue crab, Rossi doubts that this shift can guarantee a living. “There isn’t really a market for blue crab. 
They sell for less than €10 per kilo.” Tunisia, which is also dealing with a massive influx of blue crabs, has developed a blue crab industry and established canning factories, Rossi notes. “If we did the same here, perhaps there would be some more opportunities.” Future prospects While fishermen are skeptical that their centuries-old livelihood can bounce back—Rossi nudged his son to find another career—scientists are careful not to make any definitive predictions. “Things are still evolving,” Pranovi says. “When new species arrive, it takes time for ecosystems to adjust.” Green crabs may learn to cope with pressure from heatwaves thanks to oxygen released by salt marshes, Barausse says. But rising water temperatures, extreme weather events and the more frequent use of MOSE are all likely to destabilize local species, according to Pranovi. With such dynamics at play, the only way for Venice’s iconic crab dish to survive may be to change its core ingredient. This may become a familiar tale in other parts of the world. “As climate change keeps undermining the habitats of traditional species, the tension between preserving tradition and adapting with new foods will become more and more common,” Parasecoli says. Ironically, the very places where the blue crabs came from—such as the Atlantic coast of North America—now deal with an invasion of their own: European green crabs. What’s the solution? Eat them. Source of the article

GOATReads: Psychology

What Makes Some Dreams Impossible to Forget?

Dream carry-over effects can be invitations to dialogue with the unconscious. An often overlooked finding of modern dream research is that dreams are generally forgotten. The human brain cycles through four or five phases of rapid eye movement (REM) sleep during an average night’s slumber, and if REM sleep is a reliable trigger of dreaming, that means everyone is forgetting nearly all the dreams that pass through their minds each night. Not remembering most of our dreams seems to be a normal, natural feature of psychological functioning. Why, then, do we remember any dreams at all? Part of the answer is that some dreams are simply impossible to forget. Setting aside personal interest, cultural influence, and other external factors, there seems to be an innate tendency within all people to experience highly intensified dreams that make a strong impact on waking awareness. Such dreams may be rare, and their impact may diminish over time, but they clearly demonstrate that some of the dreams that cross the memory threshold do so because of their vivid experiential qualities, what I and other researchers call carry-over effects. Varieties of Carry-Over Effects Carry-over effects are feelings, sensations, and bodily responses from dreaming that are still experienced even after awakening. It’s like a part of the dream world manages to seep into the waking world. Different kinds of dreams have different kinds of carry-over effects. For example, an intense nightmare of being chased by a frightening stranger can have the carry-over effects of awakening in a full-body sweat, muscles trembling, with increased respiration and heart rate. Alternatively, a dream of a pleasant romantic encounter can lead to carry-over effects of strong genital arousal, occasionally leading to climax. Vivid dreams of flying and falling can both generate extremely realistic carry-over effects involving visceral sensations of gravity. This variety of carry-over effects shows that dreaming is not just a complex mental process, but a complex bodily process, too. Many different physiological systems can be activated during REM sleep and dreaming, but instead of being directed outward, as they are in the waking state, these systems are directed inward, toward the creation of the imaginal world of the dream. Possible Meanings of Carry-Over Effects Perhaps carry-over effects are merely glitches of the sleeping brain, the accidental side-effects of a random surge of energy during REM sleep, like a cup that spills when filled with too much water. That is possible, but at least two other explanations suggest a more adaptive value for dreams with these highly memorable qualities. First is that the wide variety of mental and physical systems stimulated in these dreams is itself the point. In our usual waking lives, we draw upon and actualize a mere fraction of our human potentials. To prevent the atrophy of those unused abilities and to keep them in a condition of functional readiness, dreams create highly lifelike scenarios in which those latent capacities may be expressed, exercised, and developed. From an evolutionary perspective, this attribute of dreaming contributes to our adaptive flexibility and readiness to act effectively in survival-related situations we have never encountered in waking life. A simple analogy would be running a car engine for an hour a day during a cold winter. 
The car isn’t actually going anywhere, but running the engine now will make it possible to drive the car in the future when the weather conditions change. A more therapeutically-focused explanation for dreams with carry-over effects is that they represent special calls for attention from the unconscious. They are signals of psychological importance and invitations to a dialogue with your dreaming self. With some dreams, the invitations may shade more into demands—you will pay attention to this, you will not forget it. A helpful approach to the interpretation of dreams with carry-over effects starts with a focus on the emotional continuities between dreaming and waking. To discern the meanings of these dreams, a good question to ask is where else these same feelings can be found in current waking life, whether in a relationship or a work project or a health-related issue. Whatever the situation may be, the dream is doing everything possible to highlight its emotional importance and make it a priority for waking awareness. Carry-over effects pose an intriguing oneiric paradox: the dream is not real, but it has real effects on our bodies and emotions in the waking world. The scary monster chasing you isn’t real, but your beating heart and feelings of terror when you wake up are real. This paradox can quite naturally stimulate people’s curiosity about religious and spiritual questions regarding identity, perception, and the nature of reality. It seems the universal experience of highly memorable dreams with vivid carry-over effects, occurring in cultures all over the world and throughout history, has in this way played an impactful role not only in the individual lives of the dreamers but also in the broader growth of religious and spiritual systems of belief. Source of the article

GOATReads: Philosophy

Record everything!

Our memories are precious to us and constitute our sense of self. Why not enhance them by recording all of your life? Current technology allows for radical memory enhancement: smartphones can record (and transcribe) every conversation, and wearable cameras can capture hours of first-person audiovisual recording. We have excellent reason to record much more of our lives than we already do and thereby enhance our memory radically. The case is simple: our memory is immensely valuable to us, and we already record much of our lives using video and photography, messenger logs and voice messages. These records are valuable to us in significant part because they enhance our memory and thereby promote its value. Recording those parts of our lives that we do not yet record would possess the same kind of value. Properly appreciated, this gives us reason to record much more (and create so-called lifelogs): nearly all of our conversations, everyday life and, in general, as many experiences as feasible. But this thesis faces important concerns, including worries about technological feasibility. Creating these records should ideally function without additional effort: they should be frictionless like messenger logs or the fictional technology in the Black Mirror episode ‘The Entire History of You’ (2011). A lifetime of records would take a lifetime to revisit in real time (with long stretches of little intrinsic interest). But we could revisit parts by searching by timestamp or tags, the content of records could be automatically analysed, and software could generate transcripts and best-of cuts. Audiologs, transcripts and lower-resolution footage wouldn’t create storage problems, either. Objections from privacy and adverse psychological effects appear more significant. I will address these objections below, and will end with a plea: try recording almost everything before you rule it out. Why is our memory so valuable to us? Beyond its obvious role for survival, let us focus on three key aspects: first, we take pleasure in remembering and reminiscing. Second, our memories help us understand ourselves, others and our place in the world. Third, our memories play a crucial role in personal identity: who we are as persons is determined by our memories. These constitute our selves, so you are literally made, in part, of your memories. Our memories are valuable because they help make us who we are as individuals. The exact role memory plays in personal identity is subject to a philosophical debate going back at least to John Locke, who in An Essay Concerning Human Understanding (1689) discussed the idea that a person remembering their previous experiences is both necessary and sufficient for that person’s identity through time. Many versions of the idea that personal identity requires some kind of psychological continuity between a person at an earlier time and a later time have since been developed. Building on this rich tradition – represented more recently by Alasdair MacIntyre, Charles Taylor, Derek Parfit and others – Marya Schechtman in ‘The Narrative Self’ (2011) argues that our selves are constituted by an autobiographical narrative formed from memories of our past experiences. On Schechtman’s view, who we are is partly determined by our autobiographical narrative and the memories on which this narrative builds; see also Dorthe Berntsen and David C Rubin’s Understanding Autobiographical Memory (2012). 
Given such views, it seems that a richer and deeper memory can quite literally turn you into a richer and deeper person. Richer and deeper memories appear to enhance your individuality: a thin and shallow autobiographical narrative appears to lead to a less substantial self, whereas a rich, detailed and deep autobiographical narrative appears to lead to a more substantial self. Assuming the latter is more desirable, a richer and deeper autobiographical narrative and the acquisition of memories that constitute it are more desirable. Consider current memory enhancement practices: why do we keep chatlogs, take pictures or write diaries at all? Of course reasons are plentiful: journaling can serve reflection; picture-taking has an artistic component; habit and device presets may play a role, etc. But we clearly value our records in large part because they enhance our memory. Our memory is valuable and this value is promoted by the records that enhance it. Records enhance our memory and thereby promote the three kinds of value just identified: we enjoy reminiscing by looking at our pictures and videos, and we understand ourselves and others better by revisiting chatlogs, social media posts and journal entries (moreover, records can – for better or for worse – be shared directly with others). But our records also enhance our autobiographical memories and thus help determine who we are as persons, allowing us to have richer personalities and a more complex individuality. A radical way to support this idea comes from the extended mind hypothesis first put forward by Andy Clark and David Chalmers in 1998, according to which external devices and the data they store can literally be part of our mind. According to this hypothesis, we extend our minds by using parts of our environment that can function for us in the way that parts of our brain do. In this vein, Richard Heersmink argues in ‘Distributed Selves’ (2016) that external information can literally constitute (autobiographical) memory and thus help determine who we are as persons. But the extended mind thesis is disputed, and it can be questioned whether external records themselves could indeed be memories: unlike records, memories have an autonomous character (memories come to mind, records normally don’t), a sense of intimate ownership, cognitive and emotional integration, and encompass all kinds of experiences, including moods, thoughts and whole conscious episodes. Through technology such as mind-machine interfaces, it may one day become possible to integrate records with our cognition as we do with biological memories, but today we can rely on a less radical alternative: external information can fail to constitute autobiographical memory proper but nonetheless help to inform and enhance our diachronic selves, just as autobiographical memory does. What matters about autobiographical memory vis-à-vis determining our selves seems to be our ability to construct and recount pieces of autobiography (for example, when wondering who we are or were, and how we ended up where we are). External records can enhance this ability and tie it more closely to reality, even if they don’t count as memories proper. Indeed, external records can be far more reliable in supplementing autobiographical narratives than relying on biological memories that can often be checked only against themselves. These are systematically distorted when recalled, and the act of recalling changes them further. 
External memory prompts aren’t subject to this and could tether us more reliably to reality than biological, subjective memories can. Just as memory disorders can diminish our personalities in undesirable ways, therapeutic memory enhancements through audiovisual records can help to restore them; see Aiden R Doherty et al’s paper ‘Wearable Cameras in Health’ (2013) and J Adam Carter and Richard Heersmink’s paper ‘The Philosophy of Memory Technologies’ (2017). Under ordinary conditions too, it seems that external memory records can help healthy individuals to develop richer and deeper selves. So, memory enhancement through records is valuable because it helps create pleasurable experiences of reminiscence and increases our understanding of ourselves and others, but also because it literally turns us into richer and deeper individuals, either because records themselves are external memories that constitute richer autobiographical narratives, or because their memory-like nature supports the continued creation of such a narrative. Insofar as becoming richer and deeper individuals is desirable, memory enhancement through records is also desirable. So far, I have argued for the value of records that most of us already create daily, based on the value of the memory that they enhance. But, when properly appreciated, it seems that the reasons that motivate these memory-enhancement practices should motivate us to record a lot more. Consider conversations and other experiences we don’t normally record, such as a conversation with a friend: if every conversation generated a chatlog (automatically transcribed by a digital device) or took the form of letters, you might come to cherish these like you cherish your biological memory. Searchability of such records is key, but we know that current technology allows this already. There are so many conversations we could record but don’t. More seems to be more here. We already record much, but significant parts of our lives remain fleeting (so many conversations, unexpected events, and much of everyday life including periods seemingly without remarkable events). Most people already record some noteworthy events (audiovisually or through writing). But remembering and recording unexpected, spontaneous but notable, mundane or recurring events is often just as valuable in retrospect. And even where single events aren’t obviously worth recording, many of them together form a significant part of our experience and contribute to who we are, just as painful or otherwise negative experiences can. Thus, even the value of records that aren’t pleasurable seems evident since they let us remember and understand ourselves better, even if we rarely revisit them. Recording social interaction allows reminiscing about it more accurately, improving our understanding of what was said and our picture of ourselves and others. Consider the last worthwhile conversation you didn’t record – wouldn’t it be good to have such a record, just in case? And if you had such a record, wouldn’t you want to keep it? Granted, most current (audiovisual) records miss much of our experiences, including inner speech, conscious experiences and emotions. But I am neither arguing that extensive audiovisual records should replace biological memories or other techniques such as journaling, nor that current recording technology can capture everything worth remembering. We already record much, you might say, why then record even more? Recording more might be valuable, but should we record everything? 
There is a risk of a status quo bias here. It is, however, unlikely that we have chanced upon the sweet spot of recording just the right amount. Finding that sweet spot presumably requires personal reflection and experimentation with the technology, to determine which records one values and to decide what kind of person one wants to be. Playful imagination can help assess alternatives to our present way of life. Have you ever wished for perfect recall? Lifelogs are almost like this, although voluntary, accurate and restricted to recordable sensory modalities. Our present recording practices indicate how much we value memory enhancement. Imagine all the pictures you have ever taken and every logged message were deleted – how would you feel and why? I would feel devastated, like having lost a part of me and a prized basis of understanding myself and others in my life. Likewise, we can imagine already possessing extensive records (of every conversation we ever had, say), then losing many of them to end up with what we in fact possess. I imagine this as a comparable loss. We can reflect on our relationship to our enhanced counterparts by thinking about people with memory disorders who use lifelogs for therapeutic purposes. Our present situation isn’t only more desirable than that of people with impaired memory, but also better than that of past people who lacked the ability to create written or audiovisual records. From the imagined perspective of people with access to a universal, friction-free lifelog, our situation would likely appear comparably less desirable. This vision from the future gives us reason to pursue more extensive recording: more will be more! Not only would we benefit from recording more, our families and future generations could too. Diaries and letters already allow glimpses into the lives of our ancestors. But imagine how much better we could understand them had they (say, your great-grandparents, or perhaps Ludwig Wittgenstein) recorded everything! As it is, we don’t possess a single audio recording of Wittgenstein’s voice. You might also want to allow posterity access to deadbots: language models trained on records of the dead to simulate responses their originators would have given. These might become uncannily life-like if trained on sufficient data – whether that would be desirable remains an open question; for a discussion, see Tomasz Hollanek and Katarzyna Nowaczyk-Basińska’s paper ‘Griefbots, Deadbots, Postmortem Avatars’ (2024). For instance, some philosophers continue teaching chatbots to impersonate their predecessors. Finally, a highly speculative possibility: digital immortality. Could collecting comprehensive data about someone lead, one day, to a reconstruction of that person? This idea faces vexing questions about personal identity and the underlying causes of consciousness, not to mention the morality of undertaking such a reconstruction; for a discussion, see Dan Simmons’s novel Hyperion (1989), which features a ‘cybrid’ (a human-AI hybrid) with the personality of John Keats, as well as Paul Smart’s essay ‘Predicting Me’ (2021). Given the preceding argument, memory enhancement through ubiquitous recording possesses significant value that any counterargument has to overcome: merely raising problems does not suffice to rule it out – a point illustrated by Plato’s Phaedrus, in which Socrates laments the negative effect of writing on biological memory. 
Nevertheless, we must discuss two serious challenges that could significantly constrain what we may record. The first concerns privacy and data autonomy. People have a presumptive right to privacy and at least some control over what data about them is collected. In most cases, consent must be acquired. Sometimes this is straightforward. Many allow people they trust to record much, and might become more eager to consent once they appreciate the value of extensive recording. Still, many will not ever want to be recorded. The resulting gaps in our records could partially be filled by journaling but, as with non-recordable experiences, sometimes we’ll have only our biological memory to go back to. But both accidental and intentional leaks (such as in revenge porn) remain a threat exacerbated by extensive recording practices. Powerful bad actors are another. Tech companies and governments have interests in records that often conflict with those of the general public. When Siri’s co-creator Tom Gruber praises AI-assisted memory enhancement, we should be wary, and the prospect of a police state with access to data on everything we’ve ever done should make us think carefully before proceeding down this path. We could consider a less privacy-friendly argument. If records partially constitute ourselves, prohibiting those required for deeper personal narratives infringes on the very core of our being and forces us to remain shallower than we could be. We would not restrict people with biological super-memories or excessive journal writers, and there is no prohibition on turning oneself into such a person. Analogously, if recording technology can constitute someone’s self, sanctioning it may appear an objectionable infringement upon our ability to self-constitute. Conceivably, privacy concerns could require the suppression of natural memory, but they don’t. One might think memory enhancement should be treated likewise. Evidently, this argument must address the fact that external memories are easier to share and subject to less distortion than biological ones. Answering these challenges requires much more work but, given the value of extensive records, I believe that concerns about privacy and autonomy should be addressed through technological means (like open-source software, encryption, automatic acquisition of consent, data deletion if desired) and legal means (like robust privacy rights and regulation of bad actors). Given my positive argument above, powerful reasons exist to implement such safeguards. We should enable people to enhance their memories safely and responsibly. Another important challenge is that recording everything could conceivably have negative psychological effects. Knowing such records to be available, why would we bother to remember anything for ourselves? Through lack of use, our biological memory might well atrophy (the use of digital maps and navigation appears to be having this effect on our ability to navigate our environs unaided). Extensive records might cause us to live in the past, become less open to new experiences, less able to cope with loss; being constantly recorded could promote self-censorship. On the other hand, conceivable positive effects include higher accountability and demands on one’s own behaviour; recording everything by default might allow us to live in the moment more; instead of straining our social relationships, it could make us more understanding of each other. 
We shouldn’t rely on speculation here – plenty of which exists both in sci-fi and in research like Björn Lundgren’s paper ‘Against AI-improved Personal Memory’ (2021) – but current empirical results appear ambiguous and don’t assess widespread use of lifelogs. Negative effects presumably vary individually, and it hasn’t been shown that they outweigh the value of lifelogs. Even in light of the previous challenges, I believe that we have compelling reasons to at least experiment with recording almost everything. Philosophy and empirical research can go only so far in establishing a technology’s consequences and desirability. However compelling the arguments, it seems plausible that the decision to radically enhance one’s memory must involve an element of individual preference. So, what kind of person with what kind of (extended) memory and recording practice would you like to be? Arguments and contemplation can help you think this through, but ultimately you must try for yourself. Source of the article

GOATReads: Sociology

Don’t Underestimate the Value of Professional Friendships

For decades, executives have repeated the truism “It’s not personal, it’s business,” implying that emotional distance is a hallmark of professionalism. But that logic is badly outdated, especially now that employees are spending more of their waking lives at work than with family or non-work friends, and the post-pandemic world has left many professionals more isolated than ever. The former U.S. Surgeon General has warned of an “epidemic of loneliness,” with profound consequences for workplace productivity, engagement, and retention, to say nothing of almost one million premature deaths worldwide each year. In knowledge-based organizations—now among the most dynamic and fastest-growing sectors of the economy—trust, psychological safety, and rapid learning are the currency of achievement, and these conditions flourish when people build genuine friendships. Today’s realities make it clear: Forming friendships through work is not only human; it’s a business and well-being imperative. I’ve spent decades studying what I call business friendships, and I can attest that these relationships produce personal and professional benefits, which include trust, emotional support, knowledge sharing, innovation, career advancement, and job performance. But many people face a barrier to obtaining those benefits. I call that barrier separate-worlds thinking, which is the idea that any interpersonal exchange involving money will ultimately be commodified and stripped of emotional value. I encourage leaders instead to adopt integrated-worlds thinking, which accepts and even celebrates relationships in which the personal and the professional overlap. In this article, I’ll offer a set of concrete actions for cultivating an integrated-worlds perspective. But first it’s important to look more closely at separate-worlds thinking, to better understand its allure and learn how to overcome it. The Allure of Separate Worlds The logic of separate-worlds thinking is both cultural and cognitive. Culturally, people learn that introducing money into personal exchanges contaminates their emotional meaning. Hence the persistent aversion in Western cultures to giving money—or even practical items—as gifts, because they fail “as an expression of real friendship.” Cognitively, psychologists point to “taboo tradeoffs”: When people are asked to weigh affection or loyalty against money, they experience moral confusion or outrage, which helps explain why putting a price on a child or human organ feels unthinkable. Separate-worlds thinking surfaces in business models. Airbnb began with a model in which hosts welcomed guests into their homes and collected payment only at the end of the visit, much like traditional guest houses. The company’s founders realized their model had to change after they stayed as Airbnb guests themselves, befriended their hosts, and then experienced the awkwardness of engaging in a financial transaction with their new friends. Today, all Airbnb payments are handled in advance at arm’s length, to keep the worlds separate. Together, these norms and mental reflexes explain why many leaders feel a quiet unease about mixing the personal and professional. Recognizing their hold on us is the first step toward integrated-worlds thinking. The Power of Integrated Worlds Before I describe the advantages of integrated-worlds thinking, try this: On a blank piece of paper, draw a circle and label it “friends.” This represents the set of people to whom you would apply that label. 
Next, draw a second circle labeled “professional network.” This represents the people relevant to your professional success. Now comes the key decision: To what extent do those circles overlap? Make this decision honestly based on how you conceive of the two categories. When I’ve asked executives to do this, roughly 10% have drawn circles that barely touched, another 10% have drawn circles that overlapped by 50% or more—and the rest have fallen somewhere in between. I’ve run analyses on 1,500 executives and found that those with more overlap tend to have bigger professional networks, higher career satisfaction, and higher incomes. Why? Because friendship is the domain of social exchange, which is effective for the exchange of information. Consider Neil Blumenthal and David Gilboa, who became friends as MBA students and founded Warby Parker, the disruptive direct-to-consumer eyewear company. Their initial informal partnership—reached over beers in a Philadelphia bar—was based on two commitments: Work hard on the company and remain friends. Blumenthal and Gilboa have collaborated successfully since 2010 as co-CEOs—a challenging and fragile arrangement—and they attribute this success to the open communication that their friendship makes possible. In addition to growing Warby Parker into a national brand, they’ve also co-created an early-stage venture capital firm called—appropriately—Good Friends. Their friendship exemplifies how personal bonds can sustain collaboration where contracts or incentives would fail. Some readers might wonder about the price of such overlap. They might expect integrated-worlds devotees to have “tainted” relationships, and to pay psychic costs from juxtaposing the personal and professional domains. In fact, my analysis shows the opposite: Integrated-worlds thinkers fare better socially and personally. Because they can savor the emotional content of their business friendships without guilt, they report more trust and closeness in those relationships, and on average they’re happier than those who maintain separate worlds. How to Integrate Your Worlds To cultivate an integrated-worlds outlook and the business friendships that come with it, I recommend taking the following four steps. Put personal before professional. My research has shown that despite what you might think, new business friendships are formed on purely personal bases—in particular, on shared values and personal identities. Instrumental interests don’t enter at all into the process. So, to make more business friendships, you need to focus on what’s personal: Apple’s Steve Jobs and Steve Wozniak initially bonded over their shared identity as hackers, whereas the decades-long friendship of Berkshire Hathaway’s Warren Buffett and Charlie Munger was founded on shared values of integrity and long-term thinking. Although new business friendships tend to form on personal foundations, they endure because they provide professional benefits. It’s not that people consciously calculate the utility of a friendship; rather, when a relationship is no longer helpful in their work, it quietly fades. This pattern—forming relationships around personal identity and values but sustaining them through professional contribution—allows people to feel their friendships are authentic even as they benefit from them at work. The relationship between Brian Chesky and Joe Gebbia, who founded Airbnb, illustrates this dynamic. 
The two are friends who share the key values of creativity and community, and a defining identity as design thinkers. On the job, their skills are symbiotic such that together they can achieve things neither could alone, with Chesky as the visionary and Gebbia as the hands-on innovator. Their professional relationship has evolved since 2022, when Gebbia stepped away from operational responsibilities to pursue philanthropic projects, but their personal relationship remains as warm as ever, with Chesky describing Gebbia as “family.” Their story illustrates a vital truth: Putting the personal connection first doesn’t just make collaboration possible. It makes it enduring. Expand your concept of friendship. People can typically maintain about 150 meaningful social relationships, spanning categories from intimates to good friends to casual friends. How you define “friend” determines how often you feel you can bridge the personal and professional. When I ask executives “What does friend mean to you?,” separate-worlds thinkers tend to give narrow definitions such as “someone you could ask for any favor” or “someone you could share any secret with.” These correspond to only the closest friendship categories and are too limited to allow the full benefits of integrating friendship with work. Integrated-worlds thinkers define a friend more expansively, often as “someone you like and would voluntarily spend time with,” which opens more of their relationships to overlap with their professional lives. This recognition of gradations is also useful for managing relationships. LinkedIn co-founder Reid Hoffman, for example, distinguishes between “allies,” who warrant frequent contact, high trust, and significant favors, and more casual “friendlies,” positive ties that can be maintained with less contact and smaller exchanges. Friendships of all kinds require time and effort, and matching those inputs to the closeness of the relationship helps ensure they remain both authentic and sustainable. Expand your concept of professional relevance. Just as expanding your definition of friendship widens your social circle, expanding your definition of professional relevance widens your professional circle. If you expand both circles, you’ll make it more likely that they’ll overlap. With that in mind, look for professional potential in every personal relationship—and don’t feel bad about doing so. Casual friends in other organizations and industries might help you see opportunities to apply artificial intelligence to employee recruiting. Your neighbor might put you in touch with a potential investor or board member. Your college roommate, who works in a different industry, might offer you a new perspective on a strategic decision. Your mother might support you during a stressful new product launch. When you can think in those terms, you’re practicing integrated-worlds thinking. I once did a study of creativity among early abstract artists and found that artists who formed more cross-disciplinary friendships (say, a poet befriending a sculptor) also tended to be more creative and become more famous. Separate-worlds thinkers tend to see friendships with people outside their day-to-day work as professionally irrelevant. In contrast, integrated-worlds thinkers are more open to professional benefits coming from any source, and therefore more likely to find those benefits. Offer—or ask for—the first favor. A global consumer-products company once asked me to study its star research scientist. 
What distinguished him, I found, was his ability to build friendly relationships inside and outside the company. He built these relationships by offering his expertise to other scientists, who gratefully accepted it and reciprocated by giving him access to information and ideas that fueled his productivity. His success points to a key practice for building business friendships: Start by giving. Friendships are the domain of social exchange; they involve the give and take of favors. Offering a favor is therefore an effective way to begin or deepen a friendship, because it signals generosity. Taking the initiative also gives you control: You can come up with a favor that will help the other person but be easy to provide. Asking for favors builds connections too. Benjamin Franklin observed that the best way to make a friend is to borrow a book. When someone grants a favor, he believed, their liking for the recipient increases, a pattern that modern experiments have confirmed. You can take advantage of this by asking someone for advice—about your leadership, your organization, or anything you want to improve. If genuinely motivated, asking for guidance acknowledges the other person’s knowledge and creates an authentic human connection. Friendships thrive on reciprocity but when you think about trading favors, it’s important to be fuzzy rather than precise in your accounting. The more precise you get, the more likely you are to start thinking of imbalances in what you owe and are owed—and the harder it becomes, because of the taboo tradeoff, to invite the exchange of relational resources such as loyalty and affection that cannot be easily quantified. Consider Jamie Dimon, the CEO of JPMorgan Chase, who keeps a handwritten list with two columns, one of people he owes favors and the other those who owe him. This practice acknowledges the importance of social exchange but isn’t ideal. The problem is that the givers and receivers of favors value them differently, so social exchange does not lend itself to double-entry bookkeeping. A better approach would be to combine Dimon’s columns into a single list of friends with whom one exchanges favors, but to ignore the balance of favors unless it seems seriously off. That’s what I mean by fuzzy accounting. Start by giving. Don’t be afraid to ask. Keep the accounting fuzzy. These small acts can transform acquaintances into lasting friends and allies. The evidence from executives, entrepreneurs, and my own research is clear: When we allow friendship and work to coexist, our performance and our happiness both rise. By emphasizing values and identities when choosing friends, by broadening who counts as a friend or is professionally relevant, and by initiating the exchange of favors, you can integrate your worlds and create relationships that are as personally rewarding as they are professionally productive. In the end, the healthiest organizations and the most fulfilled leaders are those that treat friendship not as a distraction from business but as a powerful antidote to the isolation that undermines performance and well-being. Source of the article

Even If You’ve Never Seen ‘Seven Samurai,’ You’ve Certainly Seen Movies Influenced by It

Director Akira Kurosawa broke all the rules—and budgets—of Japanese filmmaking with his 1954 classic. But the final product influenced a generation of directors The film was long and getting longer by the day. Worse still, the director was bent on shooting outdoors, on a mountainous peninsula west of Tokyo, and the rain, during the interminably long production period between 1953 and 1954, was just as bent on falling in heavy sheets. Before production was anywhere close to wrapped, the budget was already ten times that of the average Japanese film, and the ballooning expenses led to extended hiatuses in shooting as the Toho Motion Picture Company scrambled to find more money. In the national press, meanwhile, the production was already a byword for bloat and dysfunction. The director had earned the disparaging nickname “Kurosawa Tenno,” or “Emperor Kurosawa,” a nod to his authoritarian bearing—and to the way his most ambitious film was expanding into its own miniature shogunate.  Seemingly untroubled by these costly, unproductive and highly publicized delays, the acclaimed Japanese director Akira Kurosawa would use the free time to go fishing. On one such trip, actor and frequent collaborator Minoru Chiaki asked the notoriously stubborn director if he was worried about the project completely running aground. “So long as my pictures are hits,” the director responded, while dangling his line off the banks of the Tama River, “I can afford to be unreasonable.”   Kurosawa once famously compared Japanese cinema to “green tea over rice”: light and wholesome. With this new film, Seven Samurai, an extraordinary action epic, he was seeking to make heartier fare, a film he hoped would be “entertaining enough to eat.”   Kurosawa’s intransigence was rewarded. The 207-minute Seven Samurai, with its extended scenes of swordplay and grandly conceived action sequences, was as innovative as it was epic, drawing superlatives from critics, turning a generation on to Japanese cinema and changing popular culture forever. Its story, characters, multicamera shooting and action choreography reoriented American moviemaking for generations. Even those who haven’t seen the movie itself have likely seen a version of it, whether in a straight remake, like The Magnificent Seven (1960), or in countless homages, from the Steve Martin comedy Three Amigos! to Pixar’s A Bug’s Life. Today, 71 years after its release, Seven Samurai still resonates. Each character seems unique, sketched in loving detail, while the filmmaking itself remains bracingly vivid. Eschewing the stiffer, sturdier theatrics of Japanese epics of the era, Kurosawa cut his action scenes using multiple cameras, giving them a heightened sense of vim and realism. Despite its faraway, 16th-century setting, Seven Samurai felt—and still feels—disquietingly modern. In the end, despite the rain and the bad press, Kurosawa’s masterpiece elevated not only the Japanese sword-fight movie but also many of the American westerns that inspired it, proving that great subtlety, and artistry, could be found in the domain of genres often dismissed as frivolous, or merely entertaining. Movies would never be the same. Born in Tokyo in 1910, Kurosawa was descended from actual samurai: The family traced its lineage back to warriors who had served an 11th-century warlord. His father, a former soldier, was proud of this bloodline. 
He was a stern disciplinarian, always instructing his family in what the filmmaker has called the “finer points of samurai etiquette.” But Kurosawa the elder did permit certain lighter entertainment, especially Western cinema. As a child, Akira studied traditional Japanese arts like calligraphy, as well as kendo, Japanese bamboo swordplay, at which he excelled despite his admitted lack of strength. But he also spent hours in movie theaters, taking in Japanese productions and Western imports. His older brother Heigo worked as a benshi: a narrator providing live commentary and context for Western silent movies screened for Japanese audiences. His brother’s work afforded the young Kurosawa not only exposure to a great many foreign films but also something just as important: free movie tickets. As a student in the 1920s, Kurosawa initially devoted himself to art at Kyoto’s Doshisha School of Western Painting. He soon realized his talents in that field would barely rise above illustrations for magazines, on the cheap. “I would knock myself out doing this,” Kurosawa had said, “But you can’t make a living this way.” Film, on the other hand, would pay. At the same time, his well-rounded student life was introducing him to all manner of thought and literature from Europe, particularly the plays of Shakespeare, the novels of Fyodor Dostoyevsky, and the economic and historical theory of Karl Marx. All this time burying himself in Western movies, literature and art informed Kurosawa’s approach as a young filmmaker. Starting in the mid-1930s, he worked his way up from an assistant director at Toho, one of Japan’s pre-eminent studios (then called PCL Cinema Studio). In subsequent years, his work would proudly exhibit this broad range of influences, including several Shakespeare adaptations and a film based on The Idiot by Dostoyevsky, the author who, Kurosawa thought, “writes most honestly about human existence.”   Kurosawa translated his early Japanese success into international interest with films like Drunken Angel (1948) and Stray Dog (1949), and by 1952, his Rashomon was voted the finest foreign film at the 24th Academy Awards.  Kurosawa initially conceived Seven Samurai as a slick, inventive retooling of the stuffier tradition of Japanese sword-fight pictures, called chanbara, which many Japanese critics compared to American westerns: simple stories filled with stock characters. But Kurosawa had more respect for Hollywood traditions than many of these critics did. He believed that, just as directors like John Ford had found a way to dignify and exalt the aesthetic and emotional landscapes of the studio western, he too could make a sword-fight movie that was historically detailed and immensely entertaining. The plot of Seven Samurai will almost certainly be familiar: In the countryside of 16th-century Japan, a group of villagers gets wind that a troupe of bandits is preparing to attack, right when the village’s barley harvest is due. With little military experience and practically no weapons at their disposal, these farmers enlist a group of warriors to come to their defense. We see the villagers recruit successive samurai, each a distinct type. There’s the wizened veteran Kambei (Takashi Shimura); the severe master swordsman Kyuzo (Seiji Miyaguchi); and the spirited, sake-glugging lout Kikuchiyo (Toshiro Mifune)—half Puck and half Falstaff, and not actually a real samurai at all. Once assembled, the motley squad trains the villagers in the basics of warfare. 
Then the bandits arrive, and the samurai lead the plucky farmers through an extended, breathlessly exciting defense of their village, and of their bountiful stores of barley. It’s a simple story—hence, in part, the near-infinite possibilities for retelling it. Film critic Sean Gilman considers Seven Samurai the universal model of the “men-on-a-mission movie.” And the film was formative for a revolutionary generation of American directors. George Lucas has recalled being dragged to a screening of Seven Samurai by his film school friend (and fellow director) John Milius: “After that I was completely hooked.” Steven Spielberg has called Kurosawa “a true visionary.” Martin Scorsese, invoking the samurai traditions of apprenticeship and ascendancy, once called Kurosawa his “master.” “Each new generation of filmmakers tries to elevate the genres they grew up on—make them artier, more sophisticated, more politically committed,” Gilman says. “The New Hollywood filmmakers saw that in Kurosawa. He made samurai films. But he made elevated samurai films. In the same way, they wanted to elevate westerns and gangster films.” For these American young guns, Kurosawa exemplified new ways of investing noir pictures, sci-fi stories and shark thrillers with a renewed sense of verve and seriousness. Though Kurosawa’s career spanned 50 years and some 30 films—many of them masterpieces—Seven Samurai remains his most influential, and most beloved. In 2022, the taste-making, canon-forming, once-a-decade poll conducted by the influential British film magazine Sight & Sound counted Seven Samurai among the 20 greatest movies of all time. Akira Kurosawa died of a stroke in 1998 at the age of 88. His body was interred in a modest grave in a cemetery behind a small Buddhist temple overlooking the surfing beaches of Kamakura, a seaside town south of Tokyo best known for its many Zen shrines. It’s a particularly humble resting place that feels oddly fitting: In a bitter irony, Kurosawa’s overwhelming popularity in the West partly soured his reputation in his native Japan. Even Seven Samurai came in for a drubbing from some domestic critics; one writer griped that Kurosawa pitting Japanese characters against each other risked compromising political and social harmony in the brittle postwar period. In Japan, Kurosawa’s films were often regarded as macho, immature and even, in Roger Ebert’s account, “too Western.” For Kurosawa, such divisions had never really made sense. “The Western and the Japanese live side by side in my mind,” he once remarked, “without the least bit of conflict.” “The Dick Van Dyke Show:” At the start of “The Night the Roof Fell In” (1962), spouses Rob and Laura have a fight when he comes home late from work. As Laura recalls the spat to a confidante, the audience sees her all dressed up, patiently serving dinner to an abrasive, unkempt, neglectful Rob. But when Rob recalls the argument at work, the flashback shows him dancing through the door like Fred Astaire, only to endure a grumpy Laura who hasn’t bothered to cook—or to wash the three-foot-tall stack of dirty dishes. “The Odd Couple:” “A Night to Dismember” (1972) finds Oscar roped into a dinner date with his ex-wife, Blanche—on the anniversary of their divorce. Each tells the story of their final fight, which occurred during a New Year’s Eve party. In Blanche’s flashback, she finds her husband smooching a female guest in the bedroom. Oscar’s shows him chivalrously putting that inebriated guest to bed. 
And the “vivid” memory of their friend Felix, who crashes the date, focuses on his conciliatory efforts and prowess as a ladies’ man. “All in the Family:” In “Everybody Tells the Truth” (1973), members of the Bunker family recount the day repairmen came to fix their fridge. Mike’s recollection shows his angry father-in-law, Archie, antagonizing the technicians. But Archie recalls everyone mistreating him, including a white repairman dressed like a mobster and a rude Black repairman in hippie garb. “Star Trek: The Next Generation:” In “A Matter of Perspective” (1990), Manua Apgar accuses Commander Will Riker of killing her husband and trying to assault her, so the crew of the USS Enterprise reconstructs the episode. Riker inputs his recollections into the ship’s computer, as do Apgar and her husband’s assistant. Their memories play as holographic movies before a tribunal. Riker insists Apgar is lying. But the Enterprise’s counselor disagrees, saying, “It is the truth as each of you remembers it.” “The X-Files:” In “Bad Blood” (1998), FBI agents Scully and Mulder travel to Texas to investigate bodies drained of blood. Their boss needs a report, so each agent lays out their version of the trip. Scully recalls shooting at a human intruder in their room at the Davey Crockett Motor Court, and missing. But Mulder recalls those bullets hitting their mark at the Sam Houston Motor Lodge—and an unharmed vampire leaping away. Source of the article

GOATReads: Politics

The U.S. Won’t Win the New Space Race by Defunding NASA

The U.S. wants to remain a superpower in space. It can't without supporting NASA.

In the early 1400s, nearly a century before Columbus's fateful voyage to the Americas, China seemed the nation best poised to use maritime might to create a global empire. Beginning in 1405, Ming Dynasty admiral Zheng He commanded a fleet of immense "treasure ships" on a series of expeditions across the Indian Ocean, showcasing China's wealth and strength as far afield as the eastern coast of Africa. But by 1433 the state-sponsored voyages had ceased. Scholars still debate what led 15th-century China to turn inward, ceding its power—and ultimately the discovery of what would become the New World—to others. But regardless of its cause, the missed opportunity is unquestionable.

Today a strange echo of this episode is unfolding—on the high frontier of space rather than the high seas. This time, however, China is rising to prominence as the U.S. squanders its advantages. Unlike the Ming court, which made no secret of decisively abandoning China's naval aspirations, some U.S. leaders now embrace space as a vital, contested domain. But while they insist they're setting a course for America's continued dominance in space science, technology and exploration, their actions contradict and undermine that goal.

Skepticism about, if not outright scorn for, civilian space spending is practically a bipartisan tradition in U.S. politics, but here we are talking chiefly about the "Make America Great Again" policymaking of President Donald J. Trump. On July 20, the 56th anniversary of the Apollo 11 moon landing, the White House released a statement in which Trump proudly declared his administration was "reigniting the United States' leadership in space" and pledged to return Americans to the moon and send them to Mars. Weeks earlier, thanks to Trump's signature budget-reconciliation bill (the "Big Beautiful Bill"), NASA had received nearly $10 billion in additional funding for heavy-lift rockets, crewed spacecraft and other elements crucial to the Artemis program, which officially began during Trump's first term.

Acting NASA administrator Sean Duffy has repeatedly parroted similar talking points. During a September press conference, he said, "We're in a second space race right now; the Chinese want to get back to the moon before us. That's not going to happen. America has led in space in the past, and we are going to continue to lead in space in the future." The U.S., Duffy asserted, would achieve this feat in 2027. (Duffy's remarks came a week after his Trump-appointed predecessor, Jim Bridenstine, more realistically testified to Congress that "unless something changes, it is highly unlikely the United States will beat China's projected timeline [of 2030] to [send humans to] the moon's surface.")

The Trump administration does deserve credit for some sound space policy—such as two executive orders, one in 2020 seeking to extend the economic sphere of the U.S. and its allies beyond low-Earth orbit and another in 2025 meant to supercharge U.S. capabilities by streamlining regulations for domestic commercial space companies. Similarly, this past August, Duffy announced the administration's plans for NASA to fast-track readying a nuclear reactor for launch to the moon by 2030—a bold move meant to secure valuable lunar territory and eventually power U.S. outposts there. But these acts must be considered alongside other policies and proposals that influence U.S. scientific and technological prospects off-world and on Earth.
Chief among these is the White House's proposed spending budget for fiscal year 2026. Despite the boost to the Artemis program, Trump's FY2026 proposal called for cutting NASA's overall budget by about 25 percent and slashing the agency's science division by nearly half. Advocacy groups such as the Planetary Society—as well as all seven living former NASA science chiefs—have condemned these proposed cuts as catastrophic for U.S. space science. The cuts, they warned, would force the cancellation of more than 40 ongoing and planned U.S. space missions. On Trump's chopping block are high-profile, decades-in-the-making projects such as NASA's Mars Sample Return mission and the next-generation Nancy Grace Roman Space Telescope. Both have direct competitors in China, which is proceeding unimpeded toward space leadership.

The cuts proposed for FY2026 are not the administration's only harmful moves. White House actions have led to the shedding of more than 2,500 NASA staffers, most of them senior employees. Innumerable federal research grants have been canceled, suspended or delayed because of ideological litmus tests. Thousands of foreign students and skilled professionals have been blocked or discouraged from living and laboring in the U.S. by immigration and guest-worker policies. As these blows stagger the federally funded scientific enterprise and accelerate a U.S. brain drain affecting both new and longtime workers, China and other nations are opening their doors to international students and scientists, including American ones, offering generous financial incentives and building state-of-the-art research hubs to attract talent from around the world.

It's hard to see how America's losses across these myriad domains won't become other nations' gains, even if we can't predict the marvelous opportunities we'll be missing out on. And, just as with China's befuddling decision to retreat from maritime greatness nearly 600 years ago, it's harder still to understand why U.S. leaders today seem so eager to lose this new space race.

Trump's push to make America great again in space presumes America isn't already the world's greatest spacefaring power—which it demonstrably is, albeit perhaps not for much longer. Our nation's continued greatness in space requires giving government-sponsored R&D more support, not less, and respecting, not disdaining, science—irrespective of politics.

Source of the article

The Problem With Letting AI Do the Grunt Work

Artificial intelligence is destroying the career ladder for aspiring artists.

One of the first sentences I was ever paid to write was "Try out lighter lipstick colors, like peach or coral." Fresh out of college in the mid-2010s, I'd scored a copy job for a how-to website. An early task involved expanding upon an article titled "How to Get Rid of Dark Lips." For the next two years, I worked on articles with headlines such as "How to Speak Like a Stereotypical New Yorker (With Examples)," "How to Eat an Insect or Arachnid," and "How to Acquire a Gun License in New Jersey." I didn't get rich or win literary awards, but I did learn how to write a clean sentence, convey information in a logical sequence, and modulate my tone for the intended audience—skills that I use daily in my current work in screenwriting, film editing, and corporate communications. Just as important, the job paid my bills while I found my way in the entertainment industry.

Artificial intelligence has rendered my first job obsolete. Today, if you want to learn "How to Become a Hip Hop Music Producer," you can just ask ChatGPT. AI is also displacing the humans doing many of my subsequent jobs: writing promotional copy for tourism boards, drafting questions for low-budget documentaries, offering script notes on student films. A cursory search for writing jobs on LinkedIn now pulls up a number of positions that involve not producing copy but training AI models to sound more human. When anyone can create a logo or marketing copy at the touch of a button, why hire a new graduate to do it?

These shifts in the job market won't deter everyone. Well-connected young people with rich families can always afford to network and take unpaid jobs. But by eliminating entry-level jobs, AI may destroy the ladder of apprenticeship necessary to develop artists, and it could leave behind a culture driven by nepo babies and chatbots.

The existential crisis is spreading across the creative landscape. Last year, the consulting firm CVL Economics estimated that artificial intelligence would disrupt more than 200,000 entertainment-industry jobs in the United States by 2026. The CEO of an AI music-generation company claimed in January that most musicians don't actually enjoy making music, and that musicians themselves will soon be unnecessary. In a much-touted South by Southwest talk earlier this year, Phil Wiser, the chief technology officer of Paramount, described how AI could streamline every step of filmmaking. Even the director James Cameron—whose classic work The Terminator warned of the dangers of intelligent machines, and whose forthcoming Avatar sequel will reportedly include a disclaimer that no AI was involved in making the film—has talked about using the technology to cut costs and speed up production schedules. Last year, the chief technology officer of OpenAI declared that "some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place."

One great promise of generative AI is that it will free artists from drudgery, allowing them to focus on the sort of "real art" they all long to do. It may not cut together the next Magnolia, but it'll do just fine with the 500th episode of Law & Order. What's the harm, studio executives might wonder, if machines take over work that seems unchallenging and rote to knowledgeable professionals? The problem is that entry-level creative jobs are much more than grunt work. Working within established formulas and routines is how young artists develop their skills.
Hunter S. Thompson began his writing career as a copy boy for Time magazine; Joan Didion was a research assistant at Vogue; the director David Lean edited newsreels; the musician Lou Reed wrote knockoff pop tunes for department stores; the filmmakers Martin Scorsese, Jonathan Demme, and Francis Ford Coppola shot cheap B movies for Roger Corman. Beyond the money, which is usually modest, low-level creative jobs offer practice time and pathways to mentorship that side gigs such as waiting tables and tending bar do not.

Having begun my own transition into filmmaking by making rough cuts of video footage for a YouTube channel, I couldn't help but be alarmed when the makers of the AI software Eddie launched an update in September that can produce first edits of films. For that YouTube channel, I shot, edited, and published three videos a week, and I received rigorous producer notes and near-immediate audience feedback. You can't help but get better at your craft that way. These jobs are also where you meet people: One of the producers at that channel later commissioned my first produced screenplay for Netflix. There's a reason the Writers Guild of America, of which I am a member, made on-set mentorship opportunities for lower-level writers a plank of its negotiations during the 2023 strike. The WGA won on that point, but it may have been too late.

The optimistic case for AI is that new artistic tools will yield new forms of art, much as the invention of the camera created the art of photography and pushed painters to explore less realistic forms. The proliferation of cheap digital video cameras helped usher in the indie-film explosion of the late 1990s. I've used several AI tools in ways that have greatly expanded my capabilities as a film editor. Working from their bedrooms, indie filmmakers can deploy what, until recently, were top-tier visual-effects capabilities. Musicians can add AI instruments to their compositions. Perhaps AI models will offer everyone unlimited artistic freedom without requiring extensive technical knowledge.

Tech companies tend to rhapsodize about the democratizing potential of their products, and AI technology may indeed offer huge rewards to the savvy and lucky artists who take maximum advantage of it. Yet past experience from social media and streaming music suggests a different progression: Like other technologies that promise digital democratization, generative AI may be better poised to enrich the companies that develop it than to help freelance creatives make a living.

In an ideal world, the elimination of entry-level work would free future writers from having to write "How to Be a Pornstar" in order to pay their rent, allowing true creativity to flourish in its place. At the moment, though, AI seems destined to squeeze the livelihoods of creative professionals who spend decades mastering a craft. Executives in Silicon Valley and Hollywood don't seem to understand that the cultivation of art also requires the cultivation of artists.

Source of the article