
GOATReads: Psychology

Why Do Some People Thrive on So Little Sleep?

Short sleepers cruise by on four to six hours a night and don’t seem to suffer ill effects.

Everyone has heard that it’s vital to get seven to nine hours of sleep a night, a recommendation repeated so often it has become gospel. Get anything less, and you are more likely to suffer from poor health in the short and long term—memory problems, metabolic issues, depression, dementia, heart disease, a weakened immune system. But in recent years, scientists have discovered a rare breed who consistently get little shut-eye and are no worse for wear. Natural short sleepers, as they are called, are genetically wired to need only four to six hours of sleep a night. These outliers suggest that quality, not quantity, is what matters. If scientists could figure out what these people do differently, it might, they hope, provide insight into sleep’s very nature.

“The bottom line is, we don’t understand what sleep is, let alone what it’s for. That’s pretty incredible, given that the average person sleeps a third of their lives,” says Louis Ptáček, a neurologist at the University of California, San Francisco.

Scientists once thought sleep was little more than a period of rest, like powering down a computer in preparation for the next day’s work. Thomas Edison called sleep a waste of time—“a heritage from our cave days”—and claimed to never sleep more than four hours a night. His invention of the incandescent lightbulb encouraged shorter sleep times in others. Today, a historically high number of U.S. adults are sleeping less than five hours a night. But modern sleep research has shown that sleep is an active, complicated process we don’t necessarily want to cut short. During sleep, scientists suspect that our bodies and brains are replenishing energy stores, flushing waste and toxins, pruning synapses and consolidating memories. As a result, chronic sleep deprivation can have serious health consequences.
Most of what we know about sleep and sleep deprivation stems from a model proposed in the 1970s by a Hungarian-Swiss researcher named Alexander Borbély. His two-process model of sleep describes how separate systems—circadian rhythm and sleep homeostasis—interact to govern when and how long we sleep. The circadian clock dictates the 24-hour cycle of sleep and wakefulness, guided by external cues like light and darkness. Sleep homeostasis, on the other hand, is driven by internal pressure that builds while you’re awake and decreases while you’re asleep, ebbing and flowing like hunger.

There’s variation in these patterns. “We’ve always known that there are morning larks and night owls, but most people fall in between. We’ve always known there are short sleepers and long sleepers, but most people fall in between,” says Ptáček. “They’ve been out there, but the reason that they haven’t been recognized is that these people generally don’t go to doctors.”

That changed when Ptáček and his colleague Ying-Hui Fu, a human geneticist and neuroscientist at the University of California, San Francisco, were introduced to a woman who felt that her early sleep schedule was a curse. The woman naturally woke up in the wee hours of the morning, when it was “cold, dark and lonely.” Her granddaughters inherited her same sleep habits. The researchers pinpointed the genetic mutation for this rare type of morning lark, and after they published their findings, thousands of extreme early risers came out of the woodwork. But Fu recalls being intrigued by one family who didn’t fit the pattern. These family members woke up early but didn’t go to bed early, and they felt refreshed after only about six hours of sleep. They were the first people identified with familial natural short sleep, a condition that runs in families like other genetic traits. Fu and Ptáček traced their abbreviated slumber to a mutation in a gene called DEC2.
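Borbély’s two-process model also has a standard quantitative form, and a toy simulation makes the interplay concrete. The sketch below is illustrative only: the time constants, thresholds and starting values are assumptions chosen to produce plausible cycles, not numbers from the article or from Borbély’s papers. Homeostatic pressure (Process S) rises exponentially toward a ceiling while awake and decays during sleep, while a 24-hour oscillation (Process C) shifts the thresholds at which sleep begins and ends.

```python
import math

# Illustrative parameters (assumptions, not published values).
TAU_RISE = 18.0          # hours: time constant of pressure build-up while awake
TAU_FALL = 4.2           # hours: time constant of pressure decay while asleep
UPPER, LOWER = 0.6, 0.17 # baseline thresholds for falling asleep / waking up
AMP = 0.1                # circadian modulation of those thresholds

def circadian(t_hours):
    """Process C: a simple 24-hour sinusoid."""
    return AMP * math.sin(2 * math.pi * t_hours / 24.0)

def simulate(hours=72, dt=0.1):
    """Step the two-process model forward.

    Returns a list of (time, pressure, asleep) samples. Sleep begins when
    homeostatic pressure crosses the circadian-modulated upper threshold,
    and ends when it falls below the lower one.
    """
    s, asleep, out = 0.5, False, []
    t = 0.0
    while t < hours:
        if asleep:
            s *= math.exp(-dt / TAU_FALL)                   # pressure decays
            if s < LOWER + circadian(t):                    # low enough -> wake
                asleep = False
        else:
            s += (1 - s) * (1 - math.exp(-dt / TAU_RISE))   # pressure builds toward 1
            if s > UPPER + circadian(t):                    # high enough -> sleep
                asleep = True
        out.append((t, s, asleep))
        t += dt
    return out
```

Run over a few simulated days, the model alternates between long waking bouts and shorter sleep bouts; in this framing, a "natural short sleeper" could correspond to a faster decay constant, dissipating the same pressure in less time.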
The researchers went on to genetically engineer the DEC2 mutation into mice, showing that the animals need less sleep than their littermates. And they found that one of the gene’s jobs is to help control levels of a brain hormone called orexin, which promotes wakefulness. Interestingly, orexin deficiency is a leading cause of narcolepsy, a sleep disorder marked by episodes of excessive daytime sleepiness. In people with short sleep, however, orexin production appears to be increased.

Over time, the team has identified seven genes associated with natural short sleep. In one family with three generations of short sleepers, the researchers found a mutation in a gene called ADRB1, which is highly active in a region of the brain stem, the dorsal pons, that’s involved in regulating sleep. When the scientists used a technique to stimulate that brain region in mice, rousing them from their sleep, mice with the ADRB1 mutation woke more easily and stayed awake longer. In a father-son pair of short sleepers, the researchers identified a mutation in another gene, NPSR1, which is involved in regulating the sleep-wake cycle. When they created mice with the same mutation, they found that the animals spent less time sleeping and, in behavioral tests, lacked the memory problems that typically follow a short night’s sleep. The team also found two distinct mutations in a gene called GRM1, in two unrelated families with shortened sleep cycles. Again, mice engineered with those mutations slept less, with no obvious health consequences.

Like mice, people who are naturally short sleepers seem to be immune to the ill effects of sleep deprivation. If anything, they do extraordinarily well. Research suggests that such people are ambitious, energetic and optimistic, with remarkable resilience against stress and higher thresholds for pain. They might even live longer.
Based on the findings in short sleepers, some researchers think it may be time to update the old two-process model of sleep, which is how Ptáček developed the idea of a third influence. The updated model might unfold like this: In the morning, the circadian clock indicates it is time to start your day, and sleep homeostasis signals you’ve gotten enough sleep to get out of bed. Then a third factor—behavioral drive—compels you to go out and do your job, or find a mate, or gather sustenance. At night, the process goes in reverse, to calm the body down for sleep. Perhaps short sleepers are so driven that they are able to overcome the innate processes that keep others in bed. But it may also be that, somehow, the brains of short sleepers are built to sleep so efficiently that they are able to do more with less. “It’s not like there’s something magical about your seven to eight hours,” says Phyllis Zee, director of the Center for Circadian and Sleep Medicine at Northwestern University.

Zee can imagine countless ways that short sleepers’ brains could be more efficient. Do they have more slow-wave sleep, the most restorative sleep stage? Do they generate higher amounts of cerebrospinal fluid, the liquid that bathes the brain and spinal cord, enabling them to get rid of more waste products? Is their metabolic rate different, helping them cycle in and out of sleep more quickly? “It’s all about efficiency, sleep efficiency—that’s how I feel,” says Fu. “Whatever their body needs to do with sleep, they can get it done in a short time.”

Recent studies from Fu and Ptáček suggest that naturally short sleepers may be more efficient at removing toxic brain aggregates that contribute to neurodegenerative disorders like Alzheimer’s disease. The researchers bred mice that had short sleep genes with mice that carried genes predisposing them to Alzheimer’s.
The Alzheimer’s mice developed a buildup of abnormal proteins—amyloid plaques and tau tangles—that, in humans, are hallmarks of dementia. But the brains of the hybrid mice developed fewer of these tangles and plaques, as if the sleep mutations were protecting the animals. Fu believes that if she conducted similar studies in models of heart disease, diabetes or other illnesses associated with sleep deprivation, she would get similar results.

It isn’t yet clear how the short sleeper genes identified thus far shield people from the ill effects of poor sleep, or how the mutations in these genes make sleep more efficient. To get at the answer, Fu and Ptáček started bringing short sleepers to their joint laboratory to measure their brain waves while they slept. Their sleep study was derailed by the Covid-19 pandemic, but they are eager to get it back on track.

The researchers are also interested in understanding other sleep outliers. Sleep duration, like most behaviors, follows a bell curve. Short sleepers sit on one end of the curve, long sleepers on the other. Fu has found one genetic mutation associated with long sleep, but long sleepers are challenging to study because their schedules don’t align with the norms and demands of society. Long sleepers are often forced to get up early to go to school or work, which can result in sleep deprivation and may contribute to depression and other illnesses.

But though sleep has a strong genetic component, it can also be shaped by the environment. Knowing that better sleep is possible, and understanding the basis, could point the way to interventions to optimize sleep, enabling more people to live longer, healthier lives. Zee’s lab, for example, has tinkered with using acoustic stimulation to boost the slow waves of deep sleep that enhance memory processing and may be one of the secrets to short sleepers’ success.
In a study, they played pink noise—a softer, more natural sound than white noise, more akin to rain or the ocean—while study participants slept. The next day those participants remembered more in a test of learning and recalling word pairs. “We can enhance memory, but we’re not making them sleep longer or necessarily shorter,” says Zee. “I think there’s a lot more to learn.”

For now, researchers recommend that people focus on getting the amount of sleep they need, recognizing it will be different for different people. Ptáček still bristles when he hears someone preach that everybody has to sleep eight hours a night. “That’s like saying everybody in the population has to be 5 foot 10,” he says. “That’s not how genetics works.”

Source of the article

Did Water Form in the Earliest Years of the Universe?

A recent study suggests huge volumes of the molecule emerged during the cosmic dawn.

Do us a favor: take a sip of water. Done it? Good. You probably needed rehydrating, but more importantly, I need to tell you something about the universe. Did you know that some of those water molecules were filtered through the trunk of an ancient tree that grew on Antarctica long before any ice covered it? Those same molecules were also once stolen by a plant that graced a hilltop on a planet that had yet to see a single flower. Before that, a mighty dinosaur drank from a pool that was once home to at least one of those molecules of water. The very first form of life, a microbe of some sort, may have been wriggling about on an effervescing hydrothermal vent as that molecule drifted through the abyssal depths of a long-forgotten sea. And billions of years ago, icy comets and soggy asteroids delivered that water molecule—and so many more like it—to a young world named Earth.

But where did all that water originally come from? Most of the matter we interact with, made from plenty of the elements on the periodic table, was forged in the cataclysmic final seconds of countless stars that had exhausted their supplies of nuclear fuel. Hydrogen and oxygen, the two atomic components of common water, aren’t rare—and after enough stars had died in our corner of the Milky Way, it’d have a decent supply of water. But how old could some of that water be? Where, and when, did the very first droplets of water in the history of the universe form?

Telescopes looking at the farthest reaches of space have found that abundances of water existed less than two billion years after the Big Bang. But a recent study, published in the journal Nature Astronomy, suggests something rather explosive: Water may have been present as early as 100 million to 200 million years after the universe came to be.
According to the authors’ simulations, huge volumes of it were formed very close to, or at, cosmic dawn—the moment the very first generation of stars set the dark skies ablaze with light. It’s difficult to overstate just how surprisingly early to the party this water may have been. “This suggests that water, the primary ingredient for life, existed even before the building blocks of our own galaxy were formed,” says Muhammad Latif, an astrophysicist at the United Arab Emirates University, and one of the study’s authors.

There are some major caveats to this research. The team didn’t detect this ancient water; they used simulations of an as-yet-unseen type of star to understand how early on that water could have formed under certain conditions. But thanks to the high fidelity of these simulations, if these primordial stars were around at cosmic dawn, then this is probably how they would have died—with a bang, and a splash. “The simulations are state-of-the-art. So yes, the results are reliable and believable,” says Mike Norman, a physicist at the University of California, San Diego, who was not involved with the new research.

And if these virtual recreations of stellar self-destruction are windows into the very distant past, then that might also mean our own waterlogged, paradisiacal world is just one in a considerably long line of oceanic planets. “The dense water cores are potential hosts of proto-planetary disks which may even lead to habitable planets forming at cosmic dawn,” says Latif. “In a nutshell, life could have originated much earlier than previously thought.”

The cosmos is built by chaos. Stars inevitably die, in a variety of spectacular ways, and in doing so create then scatter a multitude of elements out into space. The most violent of these deaths are associated with truly giant stars and are known as supernovas—explosions that sometimes outshine entire galaxies.
Sometimes these stars simply burn through all their internal fuel reserves and implode under their own immense gravity. Other times, a voracious star eats too much of a companion star nearby and gives itself a destructive bout of thermonuclear indigestion. Either way, supernovas produce a bevy of elements, from the lighter common ones to the rarer heavier ones.

As I write this, I find myself glancing at my wedding ring. It’s made of tantalum, a blueish-silver metal. It may have been mined somewhere on Earth in the not-too-distant past, but originally, it was molded in the heart of an expiring star—either a smaller one that had ballooned into a red giant or a giant crucible that ignited into a supernova. That ring may be a symbol of affection in the extreme, but it’s also the shiny wreckage of a cosmic lighthouse.

Water is also a byproduct of star death, but comparing it to something like tantalum might seem odd. After all, water is pretty much everywhere we look, from Earth’s oceans to the solar system’s myriad icy moons, all the way out to distant planets orbiting alien stars. In today’s universe, forming water is also quite easy: All one needs is to stick two run-of-the-mill hydrogen atoms to one oxygen atom in a sufficiently cold patch of an already frigid universe. But it wasn’t always so effortless to keep the cosmos hydrated. Unless you formed a lot of water everywhere all at once, cosmic radiation and the high temperature conditions around exploding stars would threaten to disintegrate all those water molecules long before any seas had a chance at forming.

Along with his colleagues, Latif was curious: When, exactly, was water first able to emerge? Naturally, their thoughts turned to the very first furnaces in the universe. Around 400,000 years after the Big Bang, the first hydrogen and helium atoms popped up before being sucked into pockets of so-called dark matter. Once in those pockets, those atoms were squashed by gravity.
Eventually, they were so thoroughly compressed that nuclear fusion got going—and boom, the very first stars lit up the universe. Astronomers have decided to give these primordial stars a counterintuitive name: Population III stars. Population II stars are the descendants of Population III stars, crafted from their detritus, while newcomers like our sun are known as Population I stars.

They may have a bit of a silly name, but Population III stars are remarkably important. As Latif and his colleagues write in their recent study, these stars, and their supernovas, “were the first nucleosynthetic engines in the universe, and they forged the heavy elements required for the later formation of planets and life.” These stars were supermassive, and they burned brightly and swiftly; they existed for just a few million years—not billions of years, like many contemporary stars—before blowing themselves to smithereens.

A notable point of contention is that Population III stars are theoretical. Even the almighty James Webb Space Telescope, which can see farther out in space—and further back in time—than any other observatory, has yet to see any clear evidence (direct or indirect) of a Population III star. Perhaps one day it will. Perhaps it won’t. But the astronomical community suspects that these primordial stars, or something very similar to them, do exist at cosmic dawn. This means that, as they try to hunt them down, astrophysicists enjoy using computers to simulate their births and deaths—and what the consequences of this life cycle may be.

This recent study, which does just that, examined two theoretical Population III stars: one 13 times as massive as the sun, and one 200 times as massive. The smaller star burned for just 12.2 million years, while the gigantic one persisted for just 2.6 million years, explains Daniel Whalen, a cosmologist at the University of Portsmouth in England and one of the new study’s authors.
Both ended their lives spectacularly, via two slightly different types of supernova. A hail of blinding light was followed by a halo of debris rocketing out in all directions. At first, both halos were remarkably hot—too hot for the oxygen and the hydrogen to mix. “Gas needs to be cooled down first before water can form,” says Latif. Instead, all this matter spent several million years flying out into the darkness.

But after a while—two million to three million years for the gigantic star’s supernova, and 30 million years for the smaller supernova—the debris halo became sufficiently chilled. The halo’s outward expansion experienced some turbulence, creating swirls that gathered mass, creating gravitational traps that drew in even more mass over time. The oxygen and hydrogen in those dense, cold traps were then able to bond—and water began to precipitate. If all the water from the smaller supernova were weighed, it would be equivalent to one-third of the Earth’s total mass. The gigantic supernova, which ejected far more hydrogen and oxygen, created a staggering 330 Earth-masses worth of water.

These simulations—whose stills represent resplendent, van Gogh-like works-in-progress—are elegant. “The results are not surprising; in fact, they are to be expected. As soon as Pop III supernovae give you heavy elements, all sorts of molecules start to form in cool dense gas,” says Norman. Making multiple worlds’ worth of water would have been incredibly easy for these fast and furious stars.

Plenty of uncertainty remains, though. The typical mass of a Population III star is not yet known, which would affect their ability to manufacture water. And, lest we forget, nobody has yet scoped a Population III star. “Simulations that make predictions without having any observations to benchmark the models against are always difficult to fully trust.
Slight tweaks to the implementation of the model could give you very different results,” says Renske Smit, an astrophysicist at England’s Liverpool John Moores University who was not involved with the new research. “That being said, we know that dust forms very rapidly from observations around 800 million years after the Big Bang, so it’s not difficult to believe water could form very early as well.” In other words: This result is big, if true.

But if it is true, the consequences for the cosmos could be remarkable. These primordial stars didn’t just create a lot of water; they also released a lot of silicon, which binds with oxygen to form a very commonplace rock. In another study—currently a preprint awaiting peer review—by the same team, models show that, just over 200 million years after the Big Bang, in the ruins of the very first stars, planets were piecing themselves together around a second generation of stellar furnaces. And those planets had access to plenty of fresh water—water that had several routes to reach them, from comet and asteroid impacts to icy dust being imprisoned within the planets as they were being built.

Just think about that for a moment. Just a few heartbeats after the beginning of everything, of both space and time, there may have been water worlds gliding around, long before there were even enough stars to form galaxies. If life took root on those oceanic worlds, and it were able to gaze upward, it would have seen a night sky staggeringly different from our own diamantine vista.

None of those primeval planets exists today. Eventually, their own stars would have died, immolating or jettisoning them in the process. Much of the water forged by those original supernovas would have been broken down and destroyed, split into its constituent atoms. And each subsequent generation of planets, and stars, would have their own water recycled from the seas of their ancestors.
There is, however, a possibility that some of the very first water ever made, by those impossibly ancient Population III stars, is still around today. Some may be floating out in the middle of nowhere. Some may be swept up in the creation of far-flung planets.

Not too long ago, I was outside, it was raining, and several droplets fell on my hand and trickled across my wedding ring. At that moment, a humbling thought popped into my mind. I bought that tantalum ring in 2024. That tantalum fell from space 4.6 billion years ago, along with much of Earth’s water. Those raindrops were fresh—but maybe, just maybe, a single drop contained one solitary molecule of water that was formed in the explosive final moments of a star that lived 13.6 billion years ago. Who knows? Perhaps the next time you’re out in the rain, the memory of a star from cosmic dawn will fall on you, too.

Source of the article

GOATReads: Philosophy

The last letter

Condemned to death by firing squad, French resistance fighters put pen to paper. Their dying words can teach us how to live.

On a wintry day in Bordeaux, France, I took refuge from the rain inside a cosy bookshop stacked to the ceiling with books. Place Gambetta, Bordeaux’s iconic square framed with majestic 18th-century limestone façades, was under construction. ‘It’s always like this,’ the owner told me with a disparaging glare. I was not sure if the comment was directed at the rain or the construction.

Inside, I browsed the shelves, soaking in the titles one by one. A book cast among thousands caught my eye: La vie à en mourir: lettres de fusillés (2003). It contained farewell letters of those shot by Nazi firing squads during the German occupation of France in the Second World War. I picked it up, opening the pages slowly and carefully as if I held in my hands a fragile treasure, like ‘this butterfly wing’ which the 19-year-old Robert Busillet, executed for his role in an intelligence-gathering and sabotage network, bequeathed to his mother ‘en souvenir de moi’, to remember him by. I flitted through the pages, reading flashes of a letter here, longer passages there.

As someone who studies war, I am no stranger to the theme of killing and dying. But this experience was different. Last letters are unlike any other type of writing I have ever encountered. They are of a singular ilk because they peer into the souls of those confronting imminent and inescapable death. Different from everyday letters, diaries, memoirs, political tracts or philosophical treatises, because of the urgency that shapes the act of writing. The authors know there will not be another chance to say what must be said. Each last letter is uniquely personal, yet there is a universal feel to them, almost as if they paint a naked portrait of the human condition. To read them incarnates the phrase penned by Michel de Montaigne.
‘If I were a maker of books,’ he wrote in the 16th century, ‘I would make a register, with comments, of various deaths. He who would teach men to die would teach them to live.’

Dawn breaks on your final morn. A prison guard hands you a blank sheet of paper and a pen two hours before your execution by Nazi firing squad. The customs and traditions of the time – sometimes, but not always, respected by the Nazi authorities – permit the condemned a final act of communication: the last letter. To whom do you write? What do you say, knowing this is the last chance to say it?

It’s not just the heroic resisters whom the Nazis executed. One could be killed for far less. In the autumn of 1941, the Militärbefehlshaber in Frankreich – the military commander who controlled Paris – enacted the ‘hostage code’, whereby all those in a state of incarceration are considered to be political hostages. In the event of a ‘terrorist attack’ – an act of armed resistance against the occupier – these political hostages could be executed in reprisal. In other words, those arrested and imprisoned for, let’s say, writing or distributing illegal tracts and newspapers, protesting in the streets, or even listening to news from forbidden radio sources such as the BBC were, effectively, handed death sentences-in-waiting.

I’ve read hundreds of last letters, written by armed resisters and political hostages alike. One day, I sat down to catalogue the ways in which the soon-to-be executed communicated to their loved ones the macabre news. It was an uncomfortable, but deeply moving, task. ‘I can no longer give any further testimony of my affection than this letter,’ began Robert Beck, the head of an active terrorist organisation, according to the Gestapo. ‘Colvert will never again see his Plouf, nor his little Plumette. He is leaving for a big big journey,’ he added, softening the blow for his children.
Jacques Baudry, who had resisted the Nazis since his high-school days when he organised protests and marches, later participating in armed attacks against the occupiers, was rather blunter in his letter to his mother: ‘They are going to rip me from this life that you gave me and that I clung to so.’ Huynh Khuong An, a young high-school teacher arrested for possessing anti-fascist propaganda and related clandestine activities, was plucked from the cistern of political hostages one sunny October day. Writing to his lover, he implores: ‘Be courageous, ma chérie. It is no doubt the last time that I write you. Today, I will have lived.’ This turn of phrase, so simple grammatically speaking, is deceptively philosophical because it captures the interval that separates the writer from the reader, the one who will have lived from the one who lives on. Death was no longer on the horizon. The moment was decided, imminent and irrevocable.

To read the letters is to take a journey inward, deep into the world of emotions at the very frontier of living and dying. In one’s final moments, superficiality cuts away, revealing something meaningful and deep about the human condition. From Montaigne:

In everything else there may be sham: the fine reasonings of philosophy may be a mere pose in us; or else our trials, by not testing us to the quick, give us a chance to keep our face always composed. But in the last scene, between death and ourselves, there is no more pretending; we must talk plain French, we must show what there is that is good and clean at the bottom of the pot.

The last letters communicate what this something, at the bottom of the pot, is. One of the most powerful theories to explain how humans face up to their own mortality was hypothesised by the American psychiatrist Elisabeth Kübler-Ross in her groundbreaking book On Death and Dying (1969).
When an individual learns of their impending death, they navigate among five stages of grieving, trying to come to terms with their own mortality: denial, anger, bargaining, depression and acceptance. Kübler-Ross observed terminally ill patients with a limited time horizon. For those killed by the Nazis, that interval was often condensed to the time allotted to write a letter. The last letters offer a raw portrait of grieving one’s own demise. Few of the condemned deny their fate. Some remain entrenched at the phase of depression. Others skip a phase, or oscillate between anger and acceptance, acceptance and depression. A surprising number traversed all the phases. And almost everyone bargains.

Bargaining means asking the question: what would I do, if only I had more time? Montaigne would have us focus on the passages related to bargaining because these, by showing us what is at the bottom of the proverbial pot, teach us to live. If the last letters are any proof, the adage that your life passes before your eyes has some truth to it. It’s not the classic image of an entire lifetime; it’s more like watching old movie reels of favourite moments. ‘I do not feel the need to sleep,’ explains Arthur Loucheux – a well-known anti-militarist and leader of a miners’ strike – to his brother at 2 am on his final night, ‘not out of fear, but to remember my life, because to sleep, bah! won’t I have time [to do so] very soon?’ Tony Bloncourt, or ‘petit Toto’, who was part of a youth battalion and partook in armed resistance, recounts to his parents: ‘My entire past comes to me in a flash of images.’ A life of 21 years. Was he thinking, as he wrote, of the years he would not live to see?

As I read their words, I’m hit by a flash from my own past. It’s a story I often tell my students who are planning to study abroad because it depicts a quintessential encounter between me, a culture and its language.
It’s a story about the little details that convey so much about local history, hiding in plain sight. There is a last-letter link, too, though at the time I did not know it. I was on my way to a lunch, navigating still-unfamiliar streets to my destination, at the crossroads of rue de Vouillé and rue Georges-Pitard in the quaint 15th arrondissement of Paris. The names meant nothing to me back then. I was oblivious to the stories that marked the public spaces I transited and inhabited.

I had just arrived in France and was still learning the language. To help me practise my grammar skills, someone had the bright idea to impose a very peculiar rule: every spoken sentence had to employ the subjunctive in some way or another. Any basic French grammar book will tell you the subjunctive is used to indicate some sort of subjectivity, or uncertainty, in the mind of the speaker. Feelings of doubt and desire, as well as expressions of necessity, possibility and judgment.

The subjunctive inhabits many last letters. Georges Pitard’s letter to his wife, Lienne, begins with the subjunctive, used as an expression of necessity: ‘It is necessary for you to be extremely courageous, because this time misfortune is upon us; it flashed like lightning and it strikes us.’ Pitard, I would eventually learn, was a lawyer who defended those unjustly imprisoned at the beginning of the occupation and was arrested for it. A man of principle: ‘I only did good, thought of easing misery,’ he wrote in his last letter to his wife before being executed as a political hostage. ‘But for some time now the elements are raging and everything conspires against men like me.’

Knowing these details adds a layer of meaning to my memory and its resonance, with the last scene playing out again and again each time I tell the story. Pitard’s final words always the same. We can imagine a 40-something Pitard in his cell writing these words as time inexorably ticks and tocks.
He seems to regret that ‘we quarrelled a few times, hurt each other for trifles’. As the execution looms ever closer, he bargains with time. Remembering the past, perhaps in shock that tomorrow will not be just like yesterday, he writes: ‘This evening, I think of your sweetness, your kindness, of our sweet moments, those from long ago and those of yesterday, know well, my darling, one could not love you more than I did.’ He seeks one final escape from the fate that awaits him, in a place where everything is pure love, where nothing else exists except dreams of her: ‘And I will fall asleep with your sweet image in my eyes and the taste of our last kisses that are not that distant, my sweet friend, my gentle little Lienne. Be sensible … Be reasonable. Love me, for a long time yet.’ The subjunctive again. Expressions of desire and longing. When time seems like an infinite plain before us, we take the days ahead for granted. There will always be time to do the things that matter most. Too often, maybe, these are a small part of a bigger canvas dominated by other priorities. Time duly runs its course, and the letter comes to an end, but not before ‘Geo’ adds a postscript: ‘I kiss passionately your photograph and press it to my heart, the first [photo] of our youth, and the one from Luchon in which you are wearing flowers.’ I imagine him in the dark of night, pressing his lips to the photo. Reliving the memories. When Lienne reads his letter, Georges will have lived. Despite the raw emotion of the last letters, it’s hard to imagine that the elements, raging, will conspire against me. Psychologically, as humans, we flee from the idea of the world carrying on without us. We push the fact of dying deep into our subconscious. Instead, we take comfort in the naive belief that tomorrow will be like yesterday, and so on, and so forth. Such is the power of denial. I remember the exact moment when the façade of denial began to crumble.
To plunge deeper into the ambiance of the dark years of the Nazi occupation, I searched out other writings from the time. I found a copy of La patrie se fait tous les jours, an anthology of texts from the French intellectual resistance. It was a first edition. The pages were crisp, still uncut, as if the book had just come off the printing press. Except it had been published in 1947, less than three years after France was liberated. To leaf through the pages required, first, slicing them apart. The same movement one makes to open a letter, it turns out. It was a slow and meticulous process. I dutifully opened them, lingering to read a poem by the resistance poet Paul Éluard – ‘Liberté’ (1942) – until I arrived at page 111. There, as I carefully opened the next few pages to reveal the last letter of Daniel Decourdemanche (known by the pseudonym of Jacques Decour) – a French professor of German literature in his 30s, living in Paris – something happened. Psychologically, it was like the floor fell out from under me, plummeting me into the tumult of the times. Decourdemanche was part of the intellectual resistance. His crime, which led to his May date with a Nazi firing squad, was to organise and distribute underground magazines, whose purpose was to rally intellectuals to the anti-fascist cause, and to inject some humanism into news cycles gorged with nationalist and divisive propaganda. In his last letter, tempted to imagine what might have been had he had more time, Decourdemanche writes to his parents: ‘I dreamt a great deal, this last while, about the wonderful meals we would have when I was freed.’ But he accepts these experiences will not include him: ‘You will have them without me, with family, but not in sadness.’ Instead of regret, his mind drifts to the meaningful experiences he did live: ‘I relived … all my travels, all my experiences, all my meals.’ And at the end: ‘It is 8 am, it will be time to leave. I ate, smoked, drank some coffee. 
I do not see any more business to settle.’ I sat there, moved but immobile, staring at these last lines, then at his signature. ‘Votre Daniel’, your Daniel. I had the strange impression of looking in a mirror, of staring death in the face. Another Daniel, also a humanist in a world of inhumanity and ruthless self-serving politics. Reading his words, I drift across the thin frontier separating the past from a parallel world. In reading how he and others confronted death, in bearing witness to their fears, hopes, joys and regrets, I am instinctively transported to an analogous moment. To whom would I write? What would I say? Am I ready to die? What would I bargain for? That’s what the last letters do: they open this frontier and beckon us to cross. Montaigne counsels his readers to come to terms with death by learning to no longer fear it. This has a liberating effect, according to the old sage, because it allows us to be more in tune with ourselves while we are among the living. The trick is to cultivate what is at the bottom of the pot long before the final act. Reading the last letters allows us to play such a trick on time. For we, the readers, are still in the world of the living. We are not yet part of those who, when the ink dries on the page and it is read by loved ones later, will have lived. Maybe we do not know what, when the time comes, we might bargain for. But the last letters tell us what those on the other side of life wanted, what they bargained for, at death’s door. The verdict had fallen. Forty-one-year-old André Cholet, condemned to death for running the radio counter-espionage wing of a major resistance group, had just seen his wife for the last time. He recounts the scene in his last letter: ‘I still have the time to talk to you ma petite, as if you were still here close to me, on the other side of the wire mesh. For this last day you were beautiful like you had never been before and oh what grief is now yours.’
I would like to be in this moment still. Bargaining to be there in that instant. To see her eyes, her smile. To smile back. To soak up all the non-verbal gestures that define a person, a loved one, her. To blow a kiss. How seldom do we remark these moments in normal times? They seem unremarkable when lived day to day, but in the last scene, between death and oneself, the emotions, hopes and regrets that comprise the human condition are heightened a thousandfold. What if we were attuned in such a way that daily encounters with loved ones were heightened a thousandfold? Or even just tenfold? Bargaining is the bedfellow of regret. Twenty-one-year-old Roger Pironneau was not sorry for the espionage that led to his arrest. He does not regret resisting. But, writing to his parents, he is sorry ‘for the suffering I caused you, the suffering I am causing you, and that which I will cause you. Sorry to everyone for the evil that I did …’ And he is sorry ‘for all the good that I did not do’. I imagine his mind wandering – let’s be clear, even though there is no chance, no illusion, of actually having more time, it wanders toward a question we readers can still pose: if only I had had more time, what good could I have done? Last letters are finite. They contain the words that fit the page allotted, and no more. What is not written remains unsaid. Arrested for acts of sabotage and other clandestine activities, Maurice Lasserre composes his last letter to his wife, Margot. He signs his name one last time, with the unique characteristic furls that make his signature his. There is just enough space for a final PS: ‘I close the envelope by cherishing you and kissing you for the last time, again good kisses. I send you my wedding ring and a lock of hair that you will keep in memory of me …’ As he folds the letter to place it in an envelope, something unexpected happens. ‘They are giving me more paper,’ he notes below his signature, before continuing on a fresh page. 
‘I take advantage to write to you again and to kiss you still once more …’ One more gesture of love. ‘And the little ones, and the older ones, too.’ Lasserre writes on. A message for each of his children. And one more thought destined for Margot: ‘Still more kisses and think that I am yours, even in face of the death that is coming.’ Another sheet of paper is like a new day, though if we thought it might be the last, perhaps our perception of the most ordinary of gestures would change. Bargaining exposes the raw core of what gives meaning to the everyday gestures. When we are young, we think there will be an infinite number of blank pages upon which to write our story. Twenty-something Claude Lalet found himself, the morning of his last day in the world of the living, writing to his new bride. Sure, he was active in various protests, which led to his arrest. But it was never supposed to end like this, being executed as a political hostage in reprisal for the assassination of a German officer by the armed resistance. In the back of a truck on the way to the quarry where he is to be executed, he composes himself: ‘Already the last letter, and already I have to leave you!’ The repetition of the word ‘already’ betrays his anger; it’s simply not fair, his fate. But Lalet does not want to dwell in anger. Focusing on the beauty around him, he observes in poignant prose: ‘Oh the road is beautiful, ah, truly!’ As the truck rumbles forward and reality sinks in deeper, he battles to keep his bitterness at bay. What was it that made life so wonderful? ‘I know I must clench my teeth. Life was so beautiful; but let us hold on to, yes hold on to our laughs and our songs …’ Lalet has every reason to be bitter, but the final lines of his last letter suggest that, deep down, he realises that anger, however valid, is empty sustenance: ‘Courage, joy; immense joy … I love you always, constantly. I kiss you, I hug you with all my strength. Long live life! 
Long live joy and love.’ All those whose letters are cited above died at the hands of the authoritarian state. They came from all walks of life and diverse political backgrounds. Some took up arms to fight back, while others resisted non-violently, or were simply caught up in the repressive nets of the state. I reread their last letters in parallel to the newsfeeds that, every day, bring ubiquitous headlines stirring nationalistic and xenophobic sentiments. Even if I cannot quite wrap my head around the absurdity of being in a position of writing my last letter, there is foreboding in the air. Instinctively, I look for parallels in the past, drifting back across that frontier the last letters have opened to me. Daniel Decourdemanche wrote in his diary in 1938 on the eve of the infamous Munich Agreement: ‘One prepares oneself, one ponders about what is to come, about what must kill us without our being able to have a gesture of defence, but it will maybe take a long time, like all incurable maladies. Waiting so long for the inevitable, this is the test.’ The diary entry is a prescient bookend to his last letter penned in 1942, before he was executed in the glade at the sinister Mont-Valérien fortress on the outskirts of Paris. As he watched the forces of history unfold, Decourdemanche was no doubt thinking of the possibility of his own death – a life cut short by the tumult of the times. ‘How to find your way around?’ he asks, in a world in which humanism is a bad word, where vitriol is the coin of the realm. Where the dykes of civility and tolerance that once kept fanaticism at bay have burst. Where there is power in hating the other, in calling the other names, in blaming the other for all our problems. As if doing so acts as a shield against whatever may come. ‘The strong who face this test,’ he proffers, ‘are not those we expect.’ Falling in step, toeing the line of intolerance, embracing the newly emboldened toxic masculinity? No.
‘The strong,’ Decourdemanche surmises, ‘are those who loved love more than everything else.’ ‘It is the right time for us to remember love,’ he tells himself. ‘Have we loved enough?’ he asks. ‘Have we spent several hours a day marvelling at others, being happy together, feeling the price of contact, the weight and value of hands, eyes, the body? Do we still know how to devote ourselves to tenderness?’ These are formidable questions. Once you realise that your days are numbered, that other emotions are competing for time and space in your life, answering them offers a chance to reorient yourself amid all the noise and contempt: ‘It is time, before disappearing in the trembling of an Earth without hope, to be entirely and definitely love, tenderness, friendship, because there is nothing else. One must swear to only care about loving, to love, to open your soul and hands, to look with the best of your eyes, to hold what you love close to you, to march without anguish, radiating tenderness.’ Back in the 21st century, this Daniel wonders how many people around him are having the same existential thoughts. Would it make a difference if everyone confronted their own mortality in earnest? Thinking of the bottom of the proverbial Montaignian pot amid the constant brouhaha, the rhetoric, the posturing and pretence of a world clutching at madness, I ask myself the question that those who can still bargain for time should ask: how might I live my life differently?

Human Computers: The Early Women of NASA

These ground-breaking female mathematicians, engineers and scientists produced calculations crucial to the success of NASA's early space missions. Barbara “Barby” Canright joined California’s Jet Propulsion Laboratory in 1939. As the first female “human computer,” she calculated everything from how many rockets were needed to make a plane airborne to what kind of rocket propellants were needed to propel a spacecraft. These calculations were done by hand, with pencil and graph paper, often taking more than a week to complete and filling up six to eight notebooks with data and formulas. After the attack on Pearl Harbor, her work, along with that of her mostly male teammates, took on a new meaning—the army needed to lift a 14,000-pound bomber into the air. She was responsible for determining the thrust-to-weight ratio and comparing the performance of engines under various conditions. Given the amount of work, more “computers” were hired, including three women: Melba Nea, Virginia Prettyman and Macie Roberts. Macie Roberts was about 20 years older than the other computers working at JPL. Coming to engineering later in life, she was meticulous and driven, rising through the ranks and becoming a supervisor in 1942. When tasked with building out her team, she made the decision to hire only women, believing men would undermine the cohesion of the group and not take direction well from a woman. Roberts set a precedent for future female supervisors who made it their job to hire women, often taking a chance on young women right out of college. Helen Ling was one such supervisor who followed in Roberts’ footsteps. Ling actively hired women who didn’t have an engineering education, encouraging them to attend night school. At a time when maternity leave did not exist, pregnancy could be detrimental to a woman’s career. Way ahead of her time, Ling offered her employees her own version of unpaid maternity leave, rehiring them after they had left to give birth.
Barbara Paulson began working at JPL in 1948, when calculating a rocket path took all day. On January 31, 1958, she played a role in the historic launch of the JPL-built Explorer 1, the first satellite successfully launched by the United States. She was tasked with plotting the data received from the satellite and a network tracking station. It was Paulson and her fellow human computers who hand-charted America’s entrance into the Space Race. Paulson left JPL to have her first daughter and, thanks to Ling’s unofficial unpaid maternity leave, returned in 1961. In the 1950s, NASA was starting to work with what we now know as computers—but most male engineers and scientists did not trust these machines, believing them to be unreliable in comparison to human calculations. Dismissing computer programming as “women’s work,” the men gave the new IBMs to the women of JPL, providing them with a unique opportunity to work with computers and learn to code. It comes as no surprise, then, that the first computer programmers in the JPL lab were women. They became attached to a specific IBM 1620, nicknaming her CORA and providing her with her own office. After graduating in 1953 with a degree in chemical engineering from the University of California, Los Angeles, Janez Lawson had the grades, degree and intelligence to get any job she wanted. The problem? Her race and gender. She responded to a JPL job ad for “Computers Wanted” that specified “no degree necessary,” which she recognized as code for “women can apply.” While it would not be an engineering position, it would put her in a lab. Macie Roberts and Helen Ling were already working at JPL, actively recruiting young women to compute data, and Lawson fit the bill. Lawson was the first African American to work in a technical position in the JPL lab.
Taking advantage of the IBM computers at their disposal, and her supervisor’s encouragement to continue her education, Lawson was one of two people sent to a special IBM training school to learn how to operate and program the computers. A remarkable group of African American women, working at what would become NASA’s Langley Research Center in Virginia, were breaking down their own gender and racial barriers. Dorothy Vaughan joined the team in 1943. Already having to ride in the colored section of a segregated bus, she was put to work in the “colored” computers section. In 1951, Vaughan became the first African American manager at Langley and started, like her cohorts on the West Coast, to hire women. That same year, Mary Jackson joined her team, working on the supersonic pressure tunnel project that tested data from wind tunnel and flight experiments. Katherine Johnson—who was awarded the Presidential Medal of Freedom in 2015 by President Barack Obama—joined the team at Langley in 1953. A physicist, space scientist and mathematician, Johnson provided the calculations for Alan Shepard’s historic first flight into space, John Glenn’s ground-breaking orbit of the Earth and the trajectory for Apollo 11’s moon landing. One of the earliest human computers still works at JPL. Now 80 and NASA’s longest-serving female employee, Sue Finley was originally hired in 1958 to work on trajectory computations for rocket launches and is now a software tester and subsystem engineer. She is currently working on NASA’s mission to Jupiter. Her legacy, and that of the other early human computers, is literally written in the stars. It was the careful and precise hand-made calculations of these women that sent Voyager to explore the solar system, wrote the C and C++ programs that launched the first Mars rover, and helped the U.S. put a man on the moon.
Though rarely seen in the famous photos of NASA’s mission control, these early human computers contributed immeasurably to the success of the United States space program.

GOATReads: Psychology

When Harmony Hides Loneliness

What China reveals about belonging and connection. "Good relationships are those where people remember your needs without you asking." This definition of connection from a Chinese research participant captures something profound about how relationships work in China. Connection isn't performed through declarations of affection or scheduled check-ins—it's demonstrated through attentiveness to unspoken needs, through the quiet knowing that comes from deep familiarity. But what happens when the conditions that make such knowing possible—shared history, geographic proximity, generational continuity—are disrupted? Our eight-country investigation of social connection (N=354 across Brazil, China, India, Morocco, the Philippines, Turkey, the United States, and Zimbabwe) included interviews with 20 Chinese participants across different ages, regions, and loneliness levels. What emerged challenges Western assumptions about loneliness as primarily a relationship problem. In the China interviews, place-based belonging and continuity emerged as a major dimension of how connection and loneliness were experienced.

The Geography of Belonging

When Chinese participants described feeling disconnected, they didn't just mention missing people. They talked about missing where they're from. "First of all, it is my hometown," one participant explained. "I have been in my hometown for 19 years, so I have a very deep sense of connection to my hometown, and I usually pay attention to news about my hometown when I read it." Another described returning to their university campus years after graduating: "When I saw the original teaching building and dormitory building, I felt extremely affectionate. I think it might be because seeing these objects reminded me of people and moments from the past—I was nostalgic for those earlier times." This isn't mere nostalgia. One participant offered a useful distinction: loneliness involves both gu (isolation from one's roots) and du (being alone).
You can be surrounded by people yet profoundly lonely if severed from your geographic and ancestral anchors. In southeastern coastal regions, entire villages constitute interconnected clans sharing surnames and ancestry. These bonds are continuously reinforced through ancestral worship rituals, Spring Festival reunions, and communal practices at ancestral halls. When migration for work requires leaving these contexts, something more than "social ties" is lost—an entire framework of belonging disappears.

The Festival Test

The depth of place-based belonging becomes most visible during festivals. Multiple participants mentioned feeling acute loneliness "as the Spring Festival approaches and I haven't been home for a long time." One participant emphasized: "In our country, even if the atmosphere in the original family is not good, people still need to return to their hometowns during specific holidays like the Spring Festival." These aren't optional social gatherings. They are moments when ancestral continuity, family obligation, and geographic rootedness converge. Missing them doesn't just mean missing a celebration—it means being cut off from the very mechanisms that constitute belonging in Chinese culture. Another participant explained it this way: "In China, when there are elderly family members, children will sit down for a meal together during the New Year—no matter how infrequently they've contacted each other." The obligation transcends current relationship quality. It's about maintaining the unbroken chain.

Silent Suffering

Perhaps most striking: Chinese participants rarely discussed loneliness openly, even when experiencing it intensely. "I think loneliness is quite abstract, and people often don't know how to describe it in words," one explained. Another pointed to deeper cultural barriers: "As I said, many people instinctively believe that loneliness is something negative. They associate it with a lack of social skills or a flaw in one's personality."
In a collectivist society that values group harmony and social connection, admitting loneliness suggests personal failure. One participant noted: "I feel like in China, people often label others as 'lonely,' which is kind of disrespectful." The stigma creates a vicious cycle. Loneliness stems partly from the inability to communicate effectively with others. But if loneliness itself prevents disclosure, help-seeking becomes impossible. As one participant observed: "Loneliness itself stems from a state of being unable to effectively communicate with those around you." This silence has consequences. Most participants described loneliness psychologically—as emotional absence, helplessness, emptiness. But a subset also reported physical spillovers: constant illness, exhaustion described as "like my batteries are permanently dead," disrupted sleep and eating patterns. Yet formal help-seeking remains rare. Only one participant mentioned wanting to see a psychologist—but found it "too expensive."

When Relationships Remain Superficial

Paradoxically, China's emphasis on social connection can itself produce loneliness. Multiple participants described a contradiction: lots of social activity, little genuine connection. "In China, social connections are often superficial, with everyone wasting large amounts of time and energy on social connection performance, which in turn deepens people's sense of loneliness," one explained. Another described China's elaborate drinking culture at banquets: "I think many people actually feel lonely at such banquets, as if forced to learn a lot of drinking table rituals." A third noted: "In fact, in our country, there's often a superficial hustle and bustle—like during the Lunar New Year, when it looks like everyone is reuniting, but in reality, there isn't much meaningful communication." The performance of connection—showing up, following rituals, maintaining appearances—doesn't guarantee its reality.
This matters because interventions focused solely on increasing social contact may miss the deeper issue.

Beyond Human Connections

Like our Indian participants, Chinese participants also revealed something Western loneliness research often overlooks: People feel connected to more than just other humans. When asked what else makes them feel connected, pets emerged most frequently—often described as providing stronger bonds than family members. One participant shared: "I have a lot of examples around me, like one of my best friends, she got a puppy about a year ago, and she has a very strong connection with her puppy... when she goes to study abroad now, she makes special trips home because of her puppy, not her family." Nature also figured prominently. And places—especially hometowns and cities visited during travel—evoke powerful feelings of connection tied to memories and experiences.

What This Means for Addressing Loneliness

China's experience reveals that effective loneliness interventions must account for cultural frameworks of belonging. In contexts where connection is rooted in place, ancestry, and ritual participation: Measurement tools should incorporate place-based belonging. Standard loneliness scales asking about relationship quality miss the geographic dimension central to the Chinese experience. Interventions should recognize migration's unique impact. Moving for opportunity doesn't just mean missing people—displacement and inability to participate in return rituals trigger loneliness even when people remain socially connected in other ways. Creating acknowledgment spaces matters. In cultures where loneliness carries shame, safe forums for honest discussion become crucial—but these must be culturally adapted, not imported wholesale from individualist contexts. Festivals and rituals aren't optional.
Policies affecting when people can return home (work schedules, travel costs, migration restrictions) directly shape loneliness by determining access to the very occasions that reinforce belonging. The solution isn't more socialization. If connection is already performative and superficial, adding more social events intensifies the problem. The question is how to create conditions for genuine rather than ritualized connection.

The Bottom Line

Behind China's cultural emphasis on harmony and social connection lies a more complex reality: rapid modernization and migration have disrupted the geographic and ancestral roots that traditionally anchored belonging. The result is a distinctly Chinese pattern of loneliness—often silent, deeply tied to place and ritual participation, and sometimes masked by outward social performance. Every country has its own architecture of belonging. Addressing loneliness effectively means understanding and respecting these structures rather than imposing universal solutions. For China, that means recognizing that the migrant worker missing Spring Festival at home isn't just lonely for family—they're experiencing a rupture in the place-based continuity and ritual participation that makes belonging possible.

GOATReads: History

What Can We Learn From Apocalyptic Times of the Past?

More than a millennium ago, a Maya community collapsed in the face of a devastating drought. One writer joined an intrepid archaeologist to upend what they thought they understood about why it all happened Ancient glyphs, forgotten temples, abandoned cities decaying in the jungle: The collapse of the Maya Empire is one of archaeology’s most notorious apocalypses. It’s also been presented to us as one of the most mysterious, and the most frightening. Millions of people seem to simply disappear from the archaeological record, in one city after another, during a time of devastating drought. It sends shivers down our spines—and tourists flocking to the ruins. But as I researched the Classic Maya (250-900 C.E.) collapse for my book Apocalypse, I came to think this was the past apocalypse that most resembles the intertwined crises we’re facing today. The way it turned out shouldn’t scare us. It should inspire us to take the future into our own hands. Whether it’s driven by human or natural forces, or a combination of both, apocalypse is a rapid, collective loss that fundamentally changes a society’s way of life and sense of identity. Archaeologists use objects, buildings and bones to reconstruct these cataclysmic events and the cultures and people they affected. They can see what happened before a world-shattering event, and they see what happened after. They can see the trends that made a society vulnerable, and they can see how survivors regrouped and transformed. For archaeologists, the most important piece of the story is not the apocalypse itself but how people reacted to it—where they moved their settlements to escape, how they changed their rituals to cope, what connections they made with other communities to survive. That’s where they find countless stories of resilience, creativity and even hope. And that’s what I saw in the Maya world, especially in a mysterious ancient city called Aké. 
Aké is less than an hour’s drive from Mérida, the capital of Mexico’s Yucatán State, but it feels like a world apart. During a recent visit, even the archaeologist Roberto Rosado-Ramirez, who has been working in Aké for 20 years, worries that he missed the turnoff. But soon we are on the correct one-lane road, driving through a tunnel of green formed by tree branches arching overhead. We’re visiting during Yucatán’s rainy season, when the region’s scrub forest bursts to life. Pendulous, teardrop-shaped birds’ nests dangle from the branches, and kaleidoscopes of marigold butterflies flutter ahead of our car. An iguana lounges in the middle of the road, unperturbed by passing bike and motorcycle traffic. After about 15 minutes, the forest gives way to the town of Aké. Signs of its past are everywhere. The ruins of a Maya pyramid sit near first base of a community baseball field in the center of town. Nearby, in the core of the restored archaeological site, another pyramid is topped with an unusual array of ancient columns. The enormous stones used to build that pyramid indicate it may have been constructed as early as between 100 B.C.E. and 300 C.E., Rosado-Ramirez tells me. A full-blown city grew up around the sacred monument between the years 300 and 600 C.E. Aké reached its height sometime between 600 and 1100, with a population of 19,000. It was economically prosperous, connected to extensive trade and diplomatic networks, and the religious heart of its community. During his first years in Aké, working on excavations led by Mexico’s National Institute of Anthropology and History (known by the Spanish acronym INAH), Rosado-Ramirez studied Aké during this time, known as the Classic period. Events ranging from market days to religious festivals happened in the city’s plaza, which was kept meticulously clean. 
The spirits of gods and ancestors were believed to inhabit the temples surrounding it, and the succession of divine monarchs who lived alongside them was responsible for keeping Aké—and the rest of the Maya world—in working order through rituals (including sacrifice and bloodletting) that placated the deities and through diplomacy that managed Aké’s relationship with other city-states, near and far. Merchants vied with the nobility for wealth and power, established trade routes, and facilitated the transport of goods over long distances; people moved from place to place with relative ease. That would all change—at first slowly, and then quickly—as an apocalypse rippled up from the south. At first, the people of Aké would have heard rumors of faraway warfare, in what is now the Petexbatun region of Guatemala, 300 miles south of Aké. In the mid- to late 700s, when Aké was at its apogee, cities and towns in that region suddenly turned themselves into walled fortresses, and years of excavations led by archaeologists from Vanderbilt University found evidence that some of the region’s most impressive buildings, even inside the fortifications, were attacked and destroyed. By whom, and why, remain mysterious. The region had plenty of water and doesn’t appear to have faced any other environmental challenge at the time. Until the war began, the region’s population was stable, and there’s no evidence of foreign invasion or attempted conquest. Perhaps the low-level tension that typically simmered between Classic Maya city-states boiled over into outright and widespread warfare for reasons we might never know. But by 830, all the cities in the region had collapsed, and their leaders were killed or had fled. Over the next century or so, the apocalypse slowly spread and consumed once powerful Maya cities such as Tikal, in northern Guatemala, and Calakmul, in the southern part of the Yucatán Peninsula.
In the 1990s, a sediment core extracted from Lake Chichancanab provided the first paleoclimatic evidence of a severe drought at the end of the Classic period. In the years since, researchers have gathered many more paleoclimate records from lakes and caves, which show that drought likely hit these southern Maya cities during the ninth century, perhaps amplifying the political instability, in a classic apocalypse pattern. Perhaps, with all the conquests, alliances and intermarriages, the elite class had grown too large and unwieldy for common farmers to support in a time when food supplies were dwindling, Rosado-Ramirez suggested to me. And maybe those intertwined political and environmental pressures led to a popular revolution in philosophy and religion that undermined the justifications for monarchy, evidence of which wasn’t preserved as well as portraits of god-kings carved in stone. As the divine kings of the south fell, their cities did, too. Without the need to glorify all-powerful rulers, the construction of new buildings and monuments stopped. Because Maya artists often engraved their work with precise dates, archaeologists can see when a city’s last monument was erected, a final gasp of proven occupation before a site’s abandonment. In a handful of Classic Maya cities, most famously Cancuén in Guatemala, archaeologists have found mass graves that some think hold the remains of the former nobility who were massacred in the political transition. But in most places, it appears that the elite class simply left once their hold on power slipped. A city without a divine king didn’t have much work for priests or artists, and so they followed. Merchants, especially those who traded in elite and exotic goods, would have made their way to better markets or returned to their homelands as they waited for the next opportunity. Commoners, mostly farmers with household fields, probably would have been able to hold out the longest, if they wanted to.
But eventually most people left the old cities, and they fell into ruin. Whatever news of these crises arrived in Aké—and it must have, especially once refugees started pouring out of the south—its residents may very well have felt protected by their geographic and cultural distance from the chaos. The northern part of the Yucatán Peninsula had always been the driest part of the Maya world, and so its residents would have been used to living with a certain amount of water stress. They would have known how to collect rainfall and conserve the water levels of their artificial reservoirs and natural cenotes, skills that perhaps served them well at the beginning of the dry period. Gradually, however, the strife crept closer and closer. Sometime between 900 and 1000, Rosado-Ramirez found, Aké built a wall that enclosed the temples and palaces of the city center and ran right across the 20-mile-long limestone road that had long connected it to the city of Izamál, practically and symbolically cutting off Aké from the wider Maya world. Perhaps the wall was meant to protect against attacks from the ambitious Itzá people, who were consolidating power in their capital, Chichén Itzá, or maybe Aké’s leadership wanted to exert more control over immigration as more and more refugees wandered a land struggling to produce enough food to feed them. Still, Aké remained a flourishing city through the tenth century. Even as political power shifted, the climate fluctuated and the wall went up, the apocalypse still hadn’t quite arrived. In a haunting echo of how many of us experience climate change today, it was always happening somewhere else, to someone else—until suddenly, it happened to them, too. An even more severe and prolonged drought hit between 1000 and 1100, overwhelming even the northern cities’ water infrastructure and management. By the end of that century, Aké and the rest of the cities in the northern Yucatán Peninsula had collapsed and were largely abandoned.
Just as had happened in the southern cities a few centuries earlier, the elites left first, and most of their subjects dispersed in their wake. As cities collapsed, old identities and territories dissolved. Architecture and art would never reach the same monumental heights, nor would they ever require the same amount of labor. Ancient trade routes disintegrated and re-formed as shadows of themselves. Scribes and sculptors, for the most part, stopped carving significant dates into stone, giving archaeologists the eerie impression that Maya history had ended. It hadn’t, of course. Most Maya people moved from cities to villages, many along the peninsula’s coast, once there were no more inland cities in which to make a life. But thatched houses are far more ephemeral than stone pyramids, Rosado-Ramirez points out, and material evidence of many of these post-apocalyptic communities decayed long ago. Information about how and where most common Maya people remade their lives in a post-apocalyptic world has remained so hard to find that past generations of archaeologists didn’t bother looking for it—or perhaps they couldn’t even imagine there was anything there to find. In Aké today, modern houses abut the remains of ancient ones, which now take the form of small mounds on the otherwise flat landscape. Many of the town’s men work in a crumbling factory building where for over a century they—and their fathers and grandfathers before them—have processed henequen, a local succulent, into some of the world’s strongest rope. Part of the factory’s roof caved in during a hurricane in 1988, and the damage was too extensive to repair, but about half of the building is still accessible. The equipment inside is so old that replacement parts are occasionally purchased from owners of historic henequen machines otherwise on display as museum pieces.
When I arrive with Rosado-Ramirez, Vicente Cocon López, his closest friend and collaborator in Aké, is there to meet us at the baseball field with his son Gerardo Cocom Mukul. Rosado-Ramirez has known Cocom Mukul, now in his 20s, since he was in elementary school; the younger man has worked as the project’s unofficial photographer for many years. The four of us climb the enormous stairs to the top of the ancient pyramid arrayed with columns, now cleared of vegetation and returned to something like its former glory thanks to INAH’s restoration efforts. The ascent requires taking huge steps that leave me short of breath. The pyramid was clearly designed to be both accessible and imposing. Cocon López, who has an impressive eye for construction techniques both ancient and modern, makes sure I notice how all the enormous stones in the staircase are roughly the same size—an indication of the incredible amount of labor and skill its ancient builders brought to it. For many years, archaeologists thought Aké had been completely abandoned after the Classic period collapse. But after years of getting to know the current inhabitants of Aké, that interpretation no longer sat right with Rosado-Ramirez. All of his training as an archaeologist had taught him to focus on monumental architecture like pyramids. But if he applied the same rule to present-day Aké, he realized, he’d be forced to conclude that the town had been abandoned decades earlier, when the henequen factory stopped being maintained. And, of course, that wasn’t at all what had happened. So Rosado-Ramirez started trying to see ancient Aké through the lens of modern Aké. Had everybody left after the apocalypse, or did it remain the home of a different kind of community?
If anyone could help him find more evidence of post-apocalyptic construction, it was Cocon López, who knew the site better than anyone and was skilled at spotting subtle variations in building patterns and materials. Without the backing of a grant or a government-funded excavation, Rosado-Ramirez started trekking out to Aké on Saturdays to walk the site with Cocon López and see what they could find. “We didn’t even have money for water,” Rosado-Ramirez remembers. Cocon López scouted the site during the week, often following old metal tracks laid down during the boom days to move henequen in horse-drawn carts, and drew maps to guide their joint exploration on the weekends. In jumbles of old stones that, to me, are barely legible as the remains of buildings, Cocon López could see the entire timeline of old Aké and how later people interacted with and repurposed what came before. He and Rosado-Ramirez found small structures built with a mix of the huge stones of Aké’s earliest urban phase and smaller stones that came later. With his local team’s help, Rosado-Ramirez identified the remains of 96 small houses within the monumental core of old Aké and excavated 18 of them. These buildings, most often found clustered together in groups of six or so around a shared patio, were filled with ceramic styles popular during the Postclassic period of the 10th to 15th century, and they occupied locations—including the city’s once pristinely empty central plaza—where commoners would have never been allowed to live before the collapse. Perhaps they had moved inside Aké’s wall for protection during an unstable time, or maybe they simply found the old border convenient for defining and unifying their now much-smaller community. Rosado-Ramirez estimates that between 170 and 380 people continued living among the ruins of Aké during the Postclassic period. The upper end of that estimate is almost exactly how many people live in Aké today. 
For over 400 years, this smaller, more egalitarian, more flexible and more resilient Postclassic lifestyle worked for the Maya people of the Yucatán Peninsula. Even after the drought ended and the environment stabilized, they never again agreed to the rule of a divine king. They had tried living in that kind of ultra-stratified complex society, and it had failed them, catastrophically. It made their cities vulnerable, their politics fragile and their religion powerless when apocalypse struck. Why would they take that kind of risk again? Instead, the Maya of the northern Yucatán Peninsula built a different kind of capital, starting around 1100. Mayapán wasn’t the seat of a god-king but rather the meeting place for a confederacy composed of representatives from powerful families and polities across the peninsula. Isotopes in the bones of people buried there show that the city attracted people from far and wide, both commoners and elites. Perhaps that’s where the rulers of Aké ended up after they fled: as junior members of Mayapán’s council. Like Aké at the end of its first phase of life, Mayapán built a city wall. Unlike Aké, its enclosed center was densely occupied with both monumental buildings and commoner neighborhoods. Mayapán was home to several imposing pyramids, but no single central plaza like the one in front of Aké’s column-topped pyramid. No one person or group controlled Mayapán, and so they didn’t need a massive public space where their followers could gather to hear their proclamations. Politics happened in the compounds of the council’s most powerful constituencies, and religious practices were more personal, with altars and incense burners inside people’s homes. Without the need to glorify individual rulers, scribes wrote books instead of carving history into stone monuments, and architects designed pyramids that were much easier to build.
Unlike in ancient Aké, people in Postclassic Mayapán didn’t need to spend their time shaping and precisely arranging huge stone blocks into elegant, perfect staircases and palace walls to comply with royal tastes. Instead, they built pyramids from jumbles of smaller stones, which were then covered in stucco and painted with colorful murals. Most of those murals have long since faded away, exposing the stone foundations beneath. When Rosado-Ramirez took me to visit these ruins, centuries later, Mayapán’s monumental architecture looked messier and less refined than its Classic period counterparts. But at the time, it would have been just as beautiful and imposing, albeit in a different way. Then, around 1400, drought struck again, and Mayapán, too, collapsed. According to surviving Maya texts, it was largely abandoned by around 1441. Its confederacy and council government had likely been a reaction to the inequality and failure of divine kingship, but the Maya of the Yucatán Peninsula were now learning that any kind of centralized power, even when it was shared, could succumb in the face of an environmental challenge that required the kind of flexibility and adaptation that only smaller communities were capable of. The people of post-apocalyptic Aké likely felt the stress of this new drought, but they didn’t have to leave their homes or remake their lives. They were already living in the safest, most adaptable way they knew. But their past was anything but lost or forgotten. People rebuilt and reimagined their communities with the help of everything their ancestors left behind, while also, perhaps, vowing not to repeat their mistakes. We don’t yet have the kind of perspective on our own time that archaeologists have on the Classic Maya collapse. They can see the whole story, while we are still in the midst of ours. But the way Maya people reinvented themselves and their societies gives me hope.
They carried forward what served them and left behind what didn’t, especially the concept of divine kingship and its resulting inequality. What if we learned to see the Classic Maya collapse not as a terrifying story of a mysterious people who vanished in the face of environmental catastrophe, but as fertile ground for an exciting and necessary transformation that changed how its people saw themselves forever? Aké, Mayapán and hundreds of other Postclassic Maya communities are not cautionary tales. They are much-needed examples of adaptation and reinvention—not despite the apocalypse, but because of it.

Phantasia

Imagination is a powerful tool, a sixth sense, a weapon. We must be careful how we use it, in life as on stage or screen

When I performed in the play The Iceman Cometh (1946) by Eugene O’Neill, I played a character who stands up near the end and pours his heart out on stage. My character was almost like a messenger in a Greek tragedy but, instead of describing some nightmarish battle, he had to recount the horror of his own failures and the regrets of his life. It was an intense, emotionally draining experience, and I had to do it night after night. Each night I wondered if I could do it again, but somehow the energy of the room, the other actors and the story itself helped me to dial in some deep emotional frequency from my own history. It feels like you’re a shaman because you kind of lose yourself and channel something. And that activates deep emotions in the audience, too. So there’s a weird connection – I’m losing myself, and the audience is losing themselves. Then we come down together, having shared something powerful. – Paul Giamatti

Like other artists, the actor is a kind of shaman. If the audience is lucky, we go with this emotional magician to other worlds and other versions of ourselves. Our enchantment or immersion into another world is not just theoretical, but sensory and emotional. How do actor and audience achieve this shared mysterious transportation? This shared ritual draws upon a kind of sixth sense, the imagination. The actor’s imagination has gone into emotional territories of intense feeling before us. Now they guide us like a psychopomp into those emotional territories by recreating them in front of us. Aristotle called this imaginative power phantasia. We might mistakenly think that phantasia is just for artists and entertainers, a rare and special talent, but it’s actually a cognitive faculty that functions in all human beings. The actor might guide us, but it’s our own imagination that enables us to immerse fully into the story.
If we activate our power of phantasia, we voluntarily summon up the real emotions we see on stage: fear, anxiety, rage, love and more. In waking life, we see this voluntary phantasia at work but, for many of us, the richest experience of phantasia comes in sleep, when the involuntary imagination awakes in the form of dreams. In The Descent of Man (1871), Charles Darwin writes: The Imagination is one of the highest prerogatives of man. By this faculty he unites, independently of the will, former images and ideas, and thus creates brilliant and novel results … Dreaming gives us the best notion of this power; as Jean Paul [Richter] … says, ‘The dream is an involuntary art of poetry.’ The dreaming brain isn’t aware that the monster chasing us is unreal. During REM sleep, your body is turned off by the temporary paralysis of sleep atonia, but your limbic brain is running hot. In waking life, the limbic system is responsible for many of the basic mammalian survival aspects of our existence: emotions, attention and focus, and is deeply involved in the fight-or-flight response to danger. The dreaming brain isn’t just faking a battle but actually fighting one in our neuroendocrine axis. That’s why we sometimes wake up sweating with our heart racing. The involuntary imagination of dreaming creates an episode of emotional reality – not sham emotions. The same is true in the theatre, the movie, the novel. We’re really stirred by the St Crispin’s Day speech in William Shakespeare’s Henry V, really terrified by Edgar Allan Poe’s short story The Pit and the Pendulum (1842), and really haunted by Andrei Tarkovsky’s film Stalker (1979). The intensity of these emotions is not just felt by the audience. For an actor, embodying a scene with another actor – who reveals, say, a deep vulnerability from losing a child – can mean that a scripted fiction enacted by two strangers on a stage actually bonds the actors themselves in real intimacy long after the play or film is over. 
Like in a dream, the limbic mind experiences art as real. An actor or writer embodies the deepest traumas and joys of life so the audience can experience them vicariously. Acting (and other collective artistic work) can be a kind of mainlining of intimacy, and the audience partakes of this intimacy too. There’s a lot of subtle embodied communication going on. There’s an intense awareness between the actors themselves, and between the audience and actors – especially in theatre. The most obvious feedback happens in comedies of course, because you can hear the laughs or the lack of them. But much more subtle stuff is happening too. Once, when I was playing Hamlet, there was an early scene with the Player King. His prop beard was slowly falling off his face – unbeknown to him – just as I was saying a line about beards. And there was this amazing energy in the whole place from the collective recognition that we were all playing in a play, but also a play that knows it’s a play. And sometimes when something goes wrong on stage – like a mistake, or a prop thing – it actually brings in a fresh energy by breaking the normal patterns, and everyone becomes more present in the room. At other times, the emotional awareness is more intimate. Once I was playing the husband opposite the actor Kathryn Hahn in a scene where another character is inadvertently saying something insulting to her, and she doesn’t know what to say in response, and I’m trying to sort of cover it over, and then we just share this quiet moment together as we listen to the other character continue talking. They shot the scene many times, but after one particular take we both looked at each other and said: ‘Wow, I really felt that one.’ And I think the authenticity of these kinds of connections can shine through to observers. In fact, I think that was the take the director eventually used.
To prepare a role, the actor must function as an empathy sponge: they work to ingest and ingurgitate all the social nuances of power, vulnerability, hope and despair. This is a sensory osmosis – the actor must cultivate this like a sixth-sense organ. It happens ‘in the dark’ of the mind so to speak, beneath the radar of conscious thinking. Nor does this rely on direct observations of human behaviour alone. According to the extended mind theory, humans offload much of who they are into the environment. The philosophers David Chalmers and Andy Clark argue that our minds don’t reside exclusively in our brains or bodies, but extend out into the physical environment (in diaries, maps, calculators and now smartphones, etc). Consequently, you can learn a great deal about someone by spending time in their home – not deducing facts like Sherlock Holmes, but absorbing subtle understandings of character, taste, temperament and life history. When an actor prepares to play a historical figure, he might find deep insights in the extended mind – the written record, the physical environs, the clothing and so on. A small detail can turn the key and open up a real ‘visitation’ from the past. When I played President John Adams in the 2008 miniseries for HBO, I studied many historical records, but the key that helped me find his character was an amazing compilation of his health complaints. Someone had culled all his letters for any references to his health, and produced this giant record of elaborate and hypochondriacal health complaints. The man was a wreck with digestive problems, toothaches, headaches, bowel troubles and more. After manic periods of high energy, he would ‘take to his bed’ for a couple of weeks. In reading all this, I began to see how to play the everyday John Adams. 
This capacity to get inside the emotional landscape of another person draws on a deep, evolutionary cognitive ability, a way of absorbing or reading what the psychologist James J Gibson called ‘affordances’. Gibson’s affordances can be understood as all the things that surround an organism in its environment, with potential to be understood, grasped and exploited. An affordance is relational: it depends on the ecological relationship between the animal and its lifeworld, rather than having an objective value. A freshly baked baguette is to a baker a proud symbol of her art; to the hungry child, it’s a meal; to the assistant at the boulangerie, an object to be arranged in the window. An affordance has meaning depending on where you stand, and much of our grasp of affordances runs beneath conscious analysis. For social mammals, including humans, many of the affordances in our environment are social in nature, and thus we spend a huge amount of perceptual energy in processing signals of behaviour, demeanour and emotion from our fellows, much of which never surfaces to our conscious mind. A chimpanzee, for example, sees the posture of the new guy as dominant – the dominance and subordinance exist in the real-time relationship between the two animals’ bodies and behaviours. The chimp doesn’t need to reason about the relationship, because the perception itself contains a great deal of information and prediction about status, disposition, character and possible behaviours. Stage actors ‘read the room’ in a similar way to our primate cousins reading their social world of dominance. A lifetime of subconsciously reading rooms (reading people) gives artists a rich palette of insights, feelings and behaviours.
Unlike other animals, humans use phantasia to expand these affordances and create alternative behaviours – alternative realities – in the real-time present, as well as in the future. We take social affordances from our existing lifeworlds and spin new worlds out of them. That is the power of phantasia, but also, as we will see, its danger. Some people think that the imagination is just a frivolous fantasy-making ability. For Plato, the imagination produces only illusion, which distracts from reality, itself apprehended by reason. The artist is concerned with producing images, which are merely shadows, reflecting, like a mirror, the surface of things, while Truth lies beyond the sensory world. In the Republic, Plato places imagery and art low on the ladder of knowledge and metaphysics, although, ironically, he tells us this through an imaginative story: the allegory of the cave. By contrast, Aristotle saw imagination as a necessary ingredient to knowledge. Memory is a repository of images and events, but imagination (phantasia) calls up, unites and combines those memories into tools for judgment and decision-making. Imagination constructs alternative scenarios from the raw data of empirical senses, and then our rational faculties can evaluate them and use them to make moral choices, or predict social behaviours, or even build better scientific theories. For Aristotle, phantasia (which comes from the Greek word for ‘light’) is as important to knowledge as light is to seeing. Although Aristotle was careful to distinguish phantasia from the ordinary five senses, because it can occur without any stimulus from outside, we could understand phantasia as a kind of sixth sense, shared by humans and many animals, a way to know the world, to which humans return in dreams.
Here, Aristotle is thinking of imagination as something like an involuntary process: the associational mashups of dreams, the subconscious tracking of affordances, the conditioned memories we use to evaluate and make sense of our experience. When we bring this process under executive control – that is, when we harness it to our waking, speculative and creative mind – we transform the involuntary imagination into the voluntary, and this ‘phantasia 2.0’ is unique to humans. Perhaps a chimpanzee might dream of a hippo it once saw, but only a Walt Disney can bring the hippo to mind whenever he wants, dress it in a tutu in his mind’s eye, draw it, animate it dancing, and release it as a film called Fantasia (1940). Contemporary science of the mind sides with Aristotle, not Plato. Phantasia is adaptive and helps us know others and ourselves better. Art is not just great for therapeutic emotional management and catharsis, but also produces knowledge, generating new ways of understanding and manipulating the world. Contemporary neurocognitive theory argues that the mind is a ‘prediction processor’. It builds mental models of the world, and tests predictions, always updating the model to reduce future errors. These cognitive processes are not possible without the imaginative faculty. The imagination helps us create possible futures (new architecture, medical breakthroughs, new political possibilities) but also helps us model other minds. When art is good – when the acting and the script are on point, or a character in a novel is nuanced – the audience actually learns more about human behaviour than real-life observation provides. This is because the interior of the character is articulated in art, whereas it remains submerged in real social interaction. We are, then, running a constant ‘simulator’ in our own minds, whether we’re consciously aware of it or not. Because of this involuntary sixth sense, we seem to know things without having figured them out.
The dark processing (reading affordances, absorbing impressions from the extended minds around us, involuntarily combining narratives in headspace, and just simulating things) serves up ‘reality’ to us without revealing its hand in the construction. The mind is always incubating an alternative or supplemental reality. Our experience is always imagination-laden. Yet the vivid, and often unconscious, nature of this cognitive process isn’t always enriching. If imagination is an involuntary creative act of cognition before downstream rationality uses it, it can also be dangerous. Without properly understanding imagination’s role in cognition, our views can present themselves to us as straightforward, accurate assessments of the world. People who disagree with us seem just ‘irrational’ (bad at weighing evidence and logic) or crazy. But once we take account of the imaginative layer of mind (the filtering and modelling we do between the raw data and the reasoned conclusions or beliefs), we see that the world itself really is different for the atheist as opposed to the Christian; the Republican as opposed to the Democrat; the rationalist versus the QAnon devotee. The legal scholars Cass R Sunstein and Adrian Vermeule argue that conspiracy theories arise when people suffer from a ‘crippled’ knowledge base because they have ‘limited’ informational sources. If you watch only one news network, or get your ‘facts’ from a crank website or radio show with no peer review, then you’re going to be highly susceptible to conspiracy thinking, and this will likely be exacerbated if your formal education included little instruction in logic and critical thinking. Thus, the answer to conspiracy theories is more education and more rational weighing of information sources. Conspiracy theories aren’t, however, just the result of alternative ‘information sources’ or limited information – we’re all awash in information.
Rather, a potent conspiracy is a narrative arc in which the believer is a heroic character. Phantasia is a potent ingredient here. The persuasiveness of imagination consists in its embodied quality – the conspiratorial mind feels and sees itself as a protagonist in a drama. A dramatic story such as the QAnon theory is reinforced by a charismatic leader (politician/actor/clergy/celebrity), creating a phantasia layer that feels real, just as the dream feels real to the limbic system and the movie feels real to the audience member. No wonder then that conspiracy theorists like to dress up. The conspiracy-minded Trump supporters who smashed into the Capitol Building in Washington, DC in January 2021 included half-naked ‘Ur-Americans’ with painted faces and buffalo headdresses, carrying signs that said ‘Q Sent Me’. A charismatic leader is like the shaman/actor on stage. They have ‘gone before’ into the embodied belief, they evoke the emotions, they involve the watcher/audience so intensely that everybody gets deeply invested. The insurrectionists in their dress-up costumes at the Capitol are less like actors and more like fully immersed audience members. The insurrection was a kind of malevolent cosplay convention in which superfans who had intensely internalised the narratives themselves took over the stage, only the ‘convention’ in this case was at the Capitol. Obviously, this makes them no less dangerous, because their guns are not props, and mob violence is wildly contagious. Our phantasia is not just ‘in our heads’ but actually extended and distributed into our environment. Just as the actor changes into costume and transforms into a new persona, so too the jingoist drapes himself in flags and paraphernalia, becoming a new persona – one that feels righteous and empowered, in this case, to do violence. There is ‘magic’ in the accoutrement.
Anthropologists and social psychologists have long recognised the unique dynamics in ritual adornment and behaviour. Ritualised collective imaginings help to produce what the French sociologist Émile Durkheim in 1912 called ‘collective effervescence’ – a feeling state or force that excites individuals and unifies them into a group. It’s a similar phenomenon in political crowds, religious ceremonies, music concerts and theatre experiences. In our current climate of partisan paranoia, we’ve all ramped up imaginative demonisation of the other. This leaves us vulnerable to dark imaginings. The Chinese American philosophical geographer Yi-Fu Tuan states it plainly in his book Landscapes of Fear (2013): ‘If we had less imagination, we would feel more secure.’ Yes, there are real threats and enemies out there, but not as many as our active imagination produces. Alas, we can’t stop fantasticating if it’s the root of human cognition, and we wouldn’t want to give it up if we could. But can we turn that awesome power of imagination toward humanising ourselves and others? Imagination recruits our natural empathy system and can amplify it. We see fear or joy in another person’s face, and we catch it like an emotional contagion. The actor has made a career of this natural human ability to recreate another’s feelings and perspectives within one’s self. Properly cultivated, this emotional mimicry can become ethical care, and art and artists play a crucial role in this cultivation. I have played some sinister characters doing some ethically dubious things in dark storylines. I’m not someone who thinks art must be ‘moral’ per se. A lot of art with really overt moral pretensions is usually pretty bad art. Having said that, we could be making better use of the imagination, making genuinely smart and nuanced characters. A lot of contemporary entertainment seems to me to have lazy renderings of characters. 
There’s a kind of shorthand going on: a character beats up someone in one scene, then kisses his mom in the next to show complexity and ambiguity, but it all feels too simple and easy sometimes. There’s a lot of contempt and cynicism in contemporary entertainment. The characters are contemptuous and cynical, and the impulses creating the characters are too. And there’s contempt for the audience: just give them crud. That’s always been a problem; I sound like an old-man moral scold. I’m all for the occasional mindless, nihilistic narrative, but the imagination is a hugely powerful tool and therefore weapon: if you’re gonna go morally dark or ambiguous, if you’re gonna lacerate people, you better know why you are. You better be damn good at what you do, like Herman Melville good. It’s oddly easy to crank out something risky and edgy; we all think we know what that is, but most of it doesn’t really risk anything important or make real critiques of injustice or power. For sure: there’s really good stuff out there. But a lot of it’s weak, masquerading, performing its importance. It’s really difficult to be ‘true’ as in ‘authentic’. Believe me, I know, I’m shooting for it myself and frequently missing the mark. It’s difficult to show how real friendships form or end, how real grief is processed, how real horror and pain are inflicted and borne, and so on. You gotta be careful with the imagination. It matters how it’s wielded. There’s a lot of opportunity for critique, but hope too. Acting is like a ‘laboratory of identity’ because the actor gets to try on many different selves. Some of them are sinister and some saintly, with all points in between. The movie industry and the arts generally are also large-scale laboratories of identity for audiences. Such power carries some responsibility. But all of us have this power of phantasia – in fact we can’t escape it – so it’s on all of us to be better actors and even directors of our stories, individual and shared.

Can Venice’s Iconic Crab Dish Survive Climate Change?

For more than 300 years, Italians have fried soft-shell green crabs, called moeche. But the culinary tradition is under threat

Domenico Rossi, a fisherman from Torcello, an island near Venice, was 6 years old when he first went fishing with his dad. “I loved everything about it,” he says. “The long days out on the water, the variety of fish, even the rough winds that would sometimes capsize our boat.” Rossi vividly remembers picking up nets full of eels, cuttlefish, prawns, crabs, gobies and soles. But that rich biodiversity is now a distant memory. In the past 30 years, the population of many species native to Venice’s lagoon, a fragile ecosystem of brackish waters and sandy inlets, has shrunk. “At least 80 percent of species have gone,” Rossi says. The 55-year-old fisherman is one of the last trained to catch local soft-shell crabs. Scientifically named Carcinus aestuarii, the green crab is the key ingredient of a beloved local dish called moeche (pronounced “moh-eh-keh”), a word that means “soft” in Venetian dialect. Dipped in eggs, dredged with flour and fried, these crabs are usually served with a splash of lemon and paired with a glass of local white wine. The origin of this dish goes back to at least the 18th century—it was mentioned in the 1792 volume on Adriatic fauna by Italian abbot and naturalist Giuseppe Olivi. As Olivi described, moeche are only found twice per year, during spring and fall, when changes in water temperatures trigger crabs to molt. Until ten years ago, it was common to find fried moeche in osterias and bacari, or informal wine bars, across Venice’s lagoon, from Chioggia in the south to Burano in the north. Recently, though, it has been increasingly hard to find them. Fishermen report a 50 percent decline in catch just in the past three years. As climate change, pollution and invasive species put pressure on local species, fishermen, chefs and locals may need to rethink their centuries-old food traditions.
A fragile ecosystem

Spanning 212 square miles, from the River Sile in the north to the River Brenta in the south, Venice’s lagoon is the largest wetland in the Mediterranean. Only 8 percent of the lagoon is made up of islands, including Venice, while the remaining surface is a mosaic of salt marshes, seagrass wetlands, mudflats and eutrophic lakes. These diverse habitats, characterized by various degrees of salinity and acidity, have historically been home to a rich variety of species. But in the past three decades, the impact of pollution from nearby industries, erosion due to motorboat traffic and warming waters has put pressure on the lagoon’s fragile ecosystem. This period coincided with the installation of MOSE, a system of movable floodgates designed to temporarily seal the lagoon from the Adriatic Sea to protect inhabited areas from sea-level rise. While essential to Venice’s survival, MOSE now prevents high-tide waters from reaching the innermost parts of the lagoon, blocking the influx of oxygen and nutrients that come with seawater and halting the formation of sandbars and salt marshes. As a result of these changes, many habitats have degraded and some native species have been hard hit. The green crab is found in many parts of the Mediterranean, including Italy, France, Spain and Tunisia. But it is only in Venice’s lagoon, in places like Chioggia, Burano or Torcello, that fishermen have developed a special technique to capture this crustacean during its molting phase. Like all crustaceans, green crabs molt while growing. During molting, they shed their outer shell, leaving behind an edible internal soft-shell. Fishermen in Venice’s lagoon have learned how to identify and catch molting crabs. “You need to learn to spot the signs on crabs’ shells to know if they are about to molt,” Rossi explains. “It takes years of just watching how your elders do it, and eventually you learn.” Crabs are typically caught 20 days before the start of the molting process.
Once caught, crabs are placed in cube-shaped nets along the shores of canals. Fishermen, or moecanti as they are called locally, check them up to twice a day to spot signs of impending molting. About two days before their shell-shedding process, they are placed in another container. “Once there, you have to check them more frequently to pick them up right when they shed their shell and they are soft,” Rossi says. As crabs get closer to molting, they become weaker, and they can fall prey to younger, stronger crabs. A key part of the moecanti’s job is to constantly check the catch to prevent this sort of cannibalism, Rossi explains. “You have to pick out the weak ones and separate them from the rest,” he says. “It takes decades just to be able to tell where crabs are in their maturation process.” After molting, soft-shell crabs are usually sold and cooked within two days. When Rossi was a child, soft-shell crabs were abundant and considered part of Venice’s affordable rural foods known as cucina povera. But today’s scarcity has turned what was once an inexpensive fishermen’s food into a highly sought-after delicacy. Just six years ago, moeche sold for €60 per kilogram. The price of one kilogram of moeche can now reach €150, Rossi explains.

Green crab goes out, blue crab comes in

It’s hard to find accurate data on the green crab population of Venice’s lagoon. Scientists mostly rely on data from fishermen. “Based on fishermen’s catch, we can say that there has been an overall decrease of green crab in the past 50 years,” says Alberto Barausse, an ecologist at the University of Padua who has studied the impact of heatwaves on green crabs in the Venice lagoon using data from fishermen’s catch since 1945. Reasons for the decrease of green crabs are complex, Barausse explains. As detailed in his 2013 study, heatwaves can stress green crabs during their early embryo stage, making them less resilient to future threats.
Changing rain patterns, with less constant rain but more frequent extreme precipitation, are changing the lagoon’s salinity levels, with a cascade of effects on its ecosystem. For example, higher salinity and warmer temperatures have encouraged the arrival of Mnemiopsis leidyi, a gelatinous marine invertebrate that eats mostly zooplankton, including the larvae of the green crab. Warmer waters have also contributed to the arrival of another highly invasive species, the blue crab. A native species of the Atlantic Ocean, the blue crab was first spotted in Venice’s lagoon around 1950. It is only in recent years that it found conditions suitable to fully expand its presence there. “Up until a few years back, water temperatures during winter were too cold for blue crabs,” says Fabio Pranovi, an ecologist at Ca’ Foscari University in Venice. “But thanks to warming waters, blue crabs now live and reproduce in the lagoon throughout the winter.” Since 2023, the blue crab population in Venice lagoon has exploded. From an ecological standpoint, blue crabs are considered an invasive species, Pranovi explains, because they compete with native species like the green crab for shelter and food. They don’t yet have a significant predator, so they are growing at a much faster rate than native species. As explained by Filippo Piccardi, a postdoctoral student in marine biology at the University of Padua who wrote a thesis on the impact of the species in Venice’s lagoon, blue crabs are omnivorous predators that have found their ideal prey among many of the lagoon’s keystone species, such as clams and mussels. In 2024, the impact of blue crabs on local clams was so acute that local authorities declared a state of emergency. For fishermen, these blue invaders are an enemy to battle with daily. “I can’t count the times I had to replace my nets in the past two years,” Rossi says. Traditional moeche fishermen like Rossi still make their fishing nets by hand.
Each family has its own way of doing it, almost like a secret recipe, he explains. Because these handmade nets are used to catch green crabs, which measure around 4 inches across, they are tightly woven with small holes. Blue crabs, which measure up to 9 inches, have much larger claws than green crabs, so they easily break net threads. “They are wickedly smart,” says Eros Grego, a moeche fisherman from Chioggia. “They come, break our nets and just wait there to feast on whatever was in the net.” Damage from blue crabs has been so significant that Rossi is considering replacing his nylon nets with iron cages. “It costs me about €20 to make a kilo of net,” he says. “If I have to replace them every season, it’s going to cost me a fortune.” Blue crabs also eat green crabs, Pranovi says, and, according to Rossi, they have been feasting on their smaller local cousins with gusto thanks to their size and speed. “When you see them underwater, it’s just striking,” Rossi says. “Local crabs are so much smaller and can only move on the seabed, while these crabs are twice their size and can swim really fast across the water.” In 2025, Rossi has not caught any green crabs that would be suitable for moeche. “It’s the first year that I find zero moeche,” he says. “All I find in my nets is blue crabs and some date mussels.” Grego, who works in the deeper southern lagoon, is having a similar experience. “We were already dealing with shrinking catch due to heatwaves and extreme rainfall,” he says, adding that changes in climate patterns had made the traditional molting season less predictable. “The blue crab is the straw that broke the camel’s back.”

Changing traditions?

The arrival of blue crabs in Venice lagoon and the simultaneous decrease of the native green crabs are pushing some chefs to rethink traditional cuisine.
Venissa, a one-Michelin-starred and green-Michelin-starred restaurant on the island of Mazzorbo, in the north of the lagoon near Torcello, has decided to no longer serve green crab. “Our philosophy is to cook dishes that don’t undermine the lagoon’s ecosystem,” says chef Francesco Brutto, who has been running Venissa with his partner, Chiara Pavan, since 2015. The couple embraced this style of low-impact cooking after noticing how Venice’s lagoon changed during the Covid-19 pandemic, when pressure from human activities like tourism was eased. “We spotted species we had not seen in years, like turtles and dolphins,” Brutto says. “So we decided to have as little impact as possible.” For that reason, Venissa mostly serves plant protein, Brutto explains. Animal protein is used only from species that are not threatened. That means invasive species like veined rapa whelk and blue crab are now fixtures of Venissa’s menu. “Right now, eating green crab is the equivalent of eating an endangered dolphin,” Brutto explains. Venissa still offers moeche, the chef clarifies, but they make it with blue crab. “Moeche of blue crab taste better in my opinion. There is more pulp compared with green crab,” he says. But not everyone is ready to give up traditional moeche. Ristorante Garibaldi, a traditional fish restaurant in Chioggia, has been serving moeche since it opened in the 1980s. “Our clients come here specifically to eat moeche,” says chef Nelson Nemedello. This year, Nemedello could only find about 800 grams of moeche from a local fisherman. “Prices are becoming insane. I paid them €170 per kilo,” he says. But demand is there, despite the price, so Nemedello and his wife keep serving green crabs. 
“It’s considered a food unique to this place, so people are willing to pay more for it.” According to Fabio Parasecoli, author of Gastronativism: Food, Identity, Politics, sticking with traditional foods can be a way to cling to local identity during times of rapid political and economic change. Traditional foods have always been intertwined with people’s sense of identity, he says, but in the past 20 years there has been a stronger identification with food in many parts of Italy, partly as a backlash against globalization. “It’s a little bit like saying this food is who we are,” he says. “If you take this away from us, then who are we?” In the case of a place like Venice, tourists’ expectations of a specific type of local gastronomic identity also play a role. “If tourists come to Venice expecting to eat traditional food like moeche, then restaurants may feel like they have to offer that,” Parasecoli explains. Plus, as Pranovi notes, it takes time for people to adjust to new flavors. “Some people find moeche made of blue crabs too big while others say the taste is not as subtle,” he says. “It is going to take time for people to change their expectations around how moeche should taste.” Changes in species distribution have always shaped food traditions. Parasecoli cites the example of potatoes, a species native to the Americas that became a widespread ingredient in European cuisine after its arrival from the New World in the 16th century. But in Venice, the pace of change feels fast to many locals. “I grew up in the lagoon, and it’s always been slightly changing. But in the past seven to eight years, I can hardly recognize it,” Rossi says. “It feels like being on the moon.” This pace of change is leaving fishermen and local authorities to play catch-up. Since the blue crab invasion started in 2023, authorities have ordered the capturing and killing of blue crabs.
But Piccardi, who studied the impact of the blue crab for his thesis, says trying to erase a fast-growing population that has found optimal environmental conditions is unrealistic. “Our advice is to focus on catching female crabs specifically in order to slow down reproduction,” he says. “And, ultimately, to learn to coexist with this new species.” Fishermen like Rossi and Grego are adapting. “In the past three years, I have mostly caught blue crab,” Rossi explains. “I might as well shift the focus of my fishing.” While open to the idea of catching blue crab, Rossi doubts that this shift can guarantee a living. “There isn’t really a market for blue crab. They sell for less than €10 per kilo.” Tunisia, which is also dealing with massive upticks in blue crab numbers, has developed a blue crab industry and established canning factories, Rossi notes. “If we did the same here, perhaps there would be some more opportunities.”

Future prospects

While fishermen are skeptical that their centuries-old livelihood can bounce back—Rossi nudged his son to find another career—scientists are careful not to make any definitive predictions. “Things are still evolving,” Pranovi says. “When new species arrive, it takes time for ecosystems to adjust.” Green crabs may learn to cope with pressure from heatwaves thanks to oxygen released by salt marshes, Barausse says. But rising water temperatures, extreme weather events and the more frequent use of MOSE are all likely to destabilize local species, according to Pranovi. With such dynamics at play, the only way for Venice’s iconic crab dish to survive may be to change its core ingredient. This may become a familiar tale in other parts of the world. “As climate change keeps undermining the habitats of traditional species, the tension between preserving tradition and adapting with new foods will become more and more common,” Parasecoli says.
Ironically, the very places where the blue crabs came from—such as the Atlantic coast of North America—now deal with an invasion of their own: European green crabs. What’s the solution? Eat them.

GOATReads: Psychology

What Makes Some Dreams Impossible to Forget?

Dream carry-over effects can be invitations to dialogue with the unconscious.

An often overlooked finding of modern dream research is that dreams are generally forgotten. The human brain cycles through four or five phases of rapid eye movement (REM) sleep during an average night’s slumber, and if REM sleep is a reliable trigger of dreaming, that means everyone is forgetting nearly all the dreams that pass through their minds each night. Not remembering most of our dreams seems to be a normal, natural feature of psychological functioning. Why, then, do we remember any dreams at all? Part of the answer is that some dreams are simply impossible to forget. Setting aside personal interest, cultural influence, and other external factors, there seems to be an innate tendency within all people to experience highly intensified dreams that make a strong impact on waking awareness. Such dreams may be rare, and their impact may diminish over time, but they clearly demonstrate that some of the dreams that cross the memory threshold do so because of their vivid experiential qualities, what I and other researchers call carry-over effects.

Varieties of Carry-Over Effects

Carry-over effects are feelings, sensations, and bodily responses from dreaming that are still experienced even after awakening. It’s like a part of the dream world manages to seep into the waking world. Different kinds of dreams have different kinds of carry-over effects. For example, an intense nightmare of being chased by a frightening stranger can have the carry-over effects of awakening in a full-body sweat, muscles trembling, with increased respiration and heart rate. Alternatively, a dream of a pleasant romantic encounter can lead to carry-over effects of strong genital arousal, occasionally leading to climax. Vivid dreams of flying and falling can both generate extremely realistic carry-over effects involving visceral sensations of gravity.
This variety of carry-over effects shows that dreaming is not just a complex mental process, but a complex bodily process, too. Many different physiological systems can be activated during REM sleep and dreaming, but instead of being directed outward, as they are in the waking state, these systems are directed inward, toward the creation of the imaginal world of the dream.

Possible Meanings of Carry-Over Effects

Perhaps carry-over effects are merely glitches of the sleeping brain, the accidental side-effects of a random surge of energy during REM sleep, like a cup that spills when filled with too much water. That is possible, but at least two other explanations suggest a more adaptive value for dreams with these highly memorable qualities. First is that the wide variety of mental and physical systems stimulated in these dreams is itself the point. In our usual waking lives, we draw upon and actualize a mere fraction of our human potentials. To prevent the atrophy of those unused abilities and to keep them in a condition of functional readiness, dreams create highly lifelike scenarios in which those latent capacities may be expressed, exercised, and developed. From an evolutionary perspective, this attribute of dreaming contributes to our adaptive flexibility and readiness to act effectively in survival-related situations we have never encountered in waking life. A simple analogy would be running a car engine for an hour a day during a cold winter. The car isn’t actually going anywhere, but running the engine now will make it possible to drive the car in the future when the weather conditions change. A more therapeutically focused explanation for dreams with carry-over effects is that they represent special calls for attention from the unconscious. They are signals of psychological importance and invitations to a dialogue with your dreaming self. With some dreams, the invitations may shade more into demands—you will pay attention to this, you will not forget it.
A helpful approach to the interpretation of dreams with carry-over effects starts with a focus on the emotional continuities between dreaming and waking. To discern the meanings of these dreams, a good question to ask is where else these same feelings can be found in current waking life, whether in a relationship or a work project or a health-related issue. Whatever the situation may be, the dream is doing everything possible to highlight its emotional importance and make it a priority for waking awareness. Carry-over effects pose an intriguing oneiric paradox: the dream is not real, but it has real effects on our bodies and emotions in the waking world. The scary monster chasing you isn’t real, but your beating heart and feelings of terror when you wake up are real. This paradox can quite naturally stimulate people’s curiosity about religious and spiritual questions regarding identity, perception, and the nature of reality. It seems the universal experience of highly memorable dreams with vivid carry-over effects, occurring in cultures all over the world and throughout history, has in this way played an impactful role not only in the individual lives of the dreamers but also in the broader growth of religious and spiritual systems of belief.

GOATReads: Philosophy

Record everything!

Our memories are precious to us and constitute our sense of self. Why not enhance them by recording all of your life?

Current technology allows for radical memory enhancement: smartphones can record (and transcribe) every conversation, and wearable cameras can capture hours of first-person audiovisual recording. We have excellent reason to record much more of our lives than we already do and thereby enhance our memory radically. The case is simple: our memory is immensely valuable to us, and we already record much of our lives using video and photography, messenger logs and voice messages. These records are valuable to us in significant part because they enhance our memory and thereby promote its value. Recording those parts of our lives that we do not yet record would possess the same kind of value. Properly appreciated, this gives us reason to record much more (and create so-called lifelogs): nearly all of our conversations, everyday life and, in general, as many experiences as feasible. But this thesis faces important concerns, including worries about technological feasibility. Creating these records should ideally function without additional effort: they should be frictionless like messenger logs or the fictional technology in the Black Mirror episode ‘The Entire History of You’ (2011). A lifetime of records would take a lifetime to revisit in real time (with long stretches of little intrinsic interest). But we could revisit parts by searching by timestamp or tags, and the content of records could be automatically analysed, and software could generate transcripts and best-of cuts. Audiologs, transcripts and lower-resolution footage wouldn’t create storage problems, either. Objections from privacy and adverse psychological effects appear more significant. I will address these objections below, and will end with a plea: try recording almost everything before you rule it out. Why is our memory so valuable to us?
Beyond its obvious role for survival, let us focus on three key aspects: first, we take pleasure in remembering and reminiscing. Second, our memories help us understand ourselves, others and our place in the world. Third, our memories play a crucial role in personal identity: who we are as persons is determined by our memories. These constitute our selves, so you are literally made, in part, of your memories. Our memories are valuable because they help make us who we are as individuals. The exact role memory plays in personal identity is subject to a philosophical debate going back at least to John Locke, who in An Essay Concerning Human Understanding (1689) discussed the idea that a person remembering their previous experiences is both necessary and sufficient for that person’s identity through time. Many versions of the idea that personal identity requires some kind of psychological continuity between a person at an earlier time and a later time have since been developed. Building on this rich tradition – represented more recently by Alasdair MacIntyre, Charles Taylor, Derek Parfit and others – Marya Schechtman in ‘The Narrative Self’ (2011) argues that our selves are constituted by an autobiographical narrative formed from memories of our past experiences. On Schechtman’s view, who we are is partly determined by our autobiographical narrative and the memories on which this narrative builds; see also Dorthe Berntsen and David C Rubin’s Understanding Autobiographical Memory (2012). Given such views, it seems that a richer and deeper memory can quite literally turn you into a richer and deeper person. Richer and deeper memories appear to enhance your individuality: a thin and shallow autobiographical narrative appears to lead to a less substantial self, whereas a rich, detailed and deep autobiographical narrative appears to lead to a more substantial self.
Assuming the latter is more desirable, a richer and deeper autobiographical narrative and the acquisition of memories that constitute it are more desirable. Consider current memory enhancement practices: why do we keep chatlogs, take pictures or write diaries at all? Of course reasons are plentiful: journaling can serve reflection; picture-taking has an artistic component; habit and device presets may play a role, etc. But we clearly value our records in large part because they enhance our memory. Our memory is valuable and this value is promoted by the records that enhance it. Records enhance our memory and thereby promote the three kinds of value just identified: we enjoy reminiscing by looking at our pictures and videos, and we understand ourselves and others better by revisiting chatlogs, social media posts and journal entries (moreover, records can – for better or for worse – be shared directly with others). But our records also enhance our autobiographical memories and thus help determine who we are as persons, allowing us to have richer personalities and a more complex individuality. A radical way to support this idea comes from the extended mind hypothesis first put forward by Andy Clark and David Chalmers in 1998, according to which external devices and the data they store can literally be part of our mind. According to this hypothesis, we extend our minds by using parts of our environment that can function for us in the way that parts of our brain do. In this vein, Richard Heersmink argues in ‘Distributed Selves’ (2016) that external information can literally constitute (autobiographical) memory and thus help determine who we are as persons. 
But the extended mind thesis is disputed, and it can be questioned whether external records themselves could indeed be memories: unlike records, memories have an autonomous character (memories come to mind, records normally don’t), a sense of intimate ownership, cognitive and emotional integration, and encompass all kinds of experiences, including moods, thoughts and whole conscious episodes. Through technology such as mind-machine interfaces, it may one day become possible to integrate records with our cognition as we do with biological memories, but today we can rely on a less radical alternative: external information can fail to constitute autobiographical memory proper but nonetheless help to inform and enhance our diachronic selves, just as autobiographical memory does. What matters about autobiographical memory vis-à-vis determining our selves seems to be our ability to construct and recount pieces of autobiography (for example, when wondering who we are or were, and how we ended up where we are). External records can enhance this ability and tie it more closely to reality, even if they don’t count as memories proper. Indeed, external records can be far more reliable in supplementing autobiographical narratives than relying on biological memories that can often be checked only against themselves. These are systematically distorted when recalled, and the act of recalling changes them further. External memory prompts aren’t subject to this and could tether us more reliably to reality than biological, subjective memories can. Just as memory disorders can diminish our personalities in undesirable ways, therapeutic memory enhancements through audiovisual records can help to restore them; see Aiden R Doherty et al’s paper ‘Wearable Cameras in Health’ (2013) and J Adam Carter and Richard Heersmink’s paper ‘The Philosophy of Memory Technologies’ (2017). 
Under ordinary conditions too, external memory records can help healthy individuals to develop richer and deeper selves. So, memory enhancement through records is valuable because it helps create pleasurable experiences of reminiscence and increases our understanding of ourselves and others, but also because it literally turns us into richer and deeper individuals: either records themselves are external memories that constitute richer autobiographical narratives, or their memory-like nature supports the continued creation of such a narrative. Insofar as becoming richer and deeper individuals is desirable, memory enhancement through records is also desirable. So far, I have argued for the value of records that most of us already create daily, based on the value of the memory that they enhance. But, when properly appreciated, the reasons that motivate these memory-enhancement practices should motivate us to record a lot more. Consider experiences we don’t normally record, such as a conversation with a friend: if every conversation generated a chatlog (automatically transcribed by a digital device) or took the form of letters, you might come to cherish these records as you cherish your biological memory. Searchability of such records is key, but current technology already allows it. There are so many conversations we could record but don’t: more seems to be more here. We already record much, yet significant parts of our lives remain fleeting – unexpected events, and much of everyday life, including periods seemingly without remarkable events. Most people already record some noteworthy events, audiovisually or through writing. But recording the unexpected, the spontaneous but notable, and the mundane or recurring often proves just as valuable in retrospect.
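The searchability just mentioned is easy to make concrete. Below is a minimal sketch of a case-insensitive keyword search over plain-text conversation transcripts; the function name, log names and contents are all invented for illustration, not taken from any real lifelogging product.

```python
# Toy sketch: keyword search over hypothetical conversation transcripts.
# All names and log contents below are invented for illustration.
def search_logs(logs, query):
    """Case-insensitive full-text search over plain-text transcripts.

    logs: mapping of log name -> transcript text.
    Returns (log name, matching line) pairs.
    """
    q = query.lower()
    return [
        (name, line)
        for name, text in sorted(logs.items())
        for line in text.splitlines()
        if q in line.lower()
    ]

chatlogs = {
    "2024-05-03-coffee-with-ana.txt": (
        "Ana: You should read Clark and Chalmers.\n"
        "Me: The extended mind paper?"
    ),
    "2024-06-11-call-with-ben.txt": "Ben: Let's plan the hiking trip.",
}

print(search_logs(chatlogs, "extended mind"))
# → [('2024-05-03-coffee-with-ana.txt', 'Me: The extended mind paper?')]
```

A real lifelog would use full-text indexing or embedding-based retrieval rather than a linear scan, but the principle is the same: once a conversation exists as text, any moment of it can be found again in an instant.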
And even where single events aren’t obviously worth recording, many of them together form a significant part of our experience and contribute to who we are, just as painful or otherwise negative experiences can. Thus, even records that aren’t pleasurable have evident value, since they let us remember and understand ourselves better – even if we rarely revisit them. Recording social interaction allows us to reminisce about it more accurately, improving our understanding of what was said and our picture of ourselves and others. Consider the last worthwhile conversation you didn’t record: wouldn’t it be good to have such a record, just in case? And if you had it, wouldn’t you want to keep it? Granted, most current (audiovisual) records miss much of our experience, including inner speech, conscious episodes and emotions. But I am arguing neither that extensive audiovisual records should replace biological memories or other techniques such as journaling, nor that current recording technology can capture everything worth remembering. We already record much, you might say – why then record even more? Recording more might be valuable, but should we record everything? There is a risk of status quo bias here: it is unlikely that we have chanced upon the sweet spot of recording just the right amount. Determining what that amount is presumably requires personal reflection and experimenting with the technology, to learn which records one values and to decide what kind of person one wants to be. Playful imagination can help assess alternatives to our present way of life. Have you ever wished for perfect recall? Lifelogs come close, although they are voluntary, accurate and restricted to recordable sensory modalities. Our present recording practices indicate how much we value memory enhancement. Imagine all the pictures you have ever taken and every logged message were deleted – how would you feel, and why?
I would feel devastated, as if I had lost a part of myself and a prized basis for understanding myself and the others in my life. Likewise, we can imagine already possessing extensive records (of every conversation we ever had, say), then losing many of them so as to end up with what we in fact possess. I imagine this as a comparable loss. We can reflect on our relationship to our enhanced counterparts by thinking about people with memory disorders who use lifelogs for therapeutic purposes. Our present recording practice isn’t only more desirable than the situation of people with impaired memory; it is also better than that of past people who lacked the ability to create written or audiovisual records. From the imagined perspective of people with access to a universal, friction-free lifelog, our situation would likely appear comparably less desirable. This vision from the future gives us reason to pursue more extensive recording: more will be more! Not only would we benefit from recording more; our families and future generations could too. Diaries and letters already allow glimpses into the lives of our ancestors. But imagine how much better we could understand them had they (say, your great-grandparents, or perhaps Ludwig Wittgenstein) recorded everything! As it is, we don’t possess a single audio recording of Wittgenstein’s voice. You might also want to allow posterity access to deadbots: language models trained on records of the dead to simulate the responses their originators would have given. These might become uncannily lifelike if trained on sufficient data – whether that would be desirable remains an open question; for a discussion, see Tomasz Hollanek and Katarzyna Nowaczyk-Basińska’s paper ‘Griefbots, Deadbots, Postmortem Avatars’ (2024). Deceased philosophers, for instance, might continue teaching through chatbots trained to impersonate them. Finally, a highly speculative possibility: digital immortality.
Could collecting comprehensive data about someone one day lead to a reconstruction of that person? This idea faces vexing questions about personal identity and the underlying causes of consciousness, not to mention the morality of undertaking such a reconstruction; for a discussion, see Dan Simmons’s novel Hyperion (1989), which features a ‘cybrid’ (a human-AI hybrid) with the personality of John Keats, as well as Paul Smart’s essay ‘Predicting Me’ (2021). Given the preceding argument, memory enhancement through ubiquitous recording possesses significant value that any counterargument has to overcome: merely raising problems does not suffice to rule it out – a point illustrated by Plato’s Phaedrus, in which Socrates laments the negative effect of writing on biological memory. Nevertheless, we must discuss two serious challenges that could significantly constrain what we may record. The first concerns privacy and data autonomy. People have a presumptive right to privacy and to at least some control over what data about them is collected. In most cases, consent must be acquired. Sometimes this is straightforward: many allow people they trust to record much, and might become more eager to consent once they appreciate the value of extensive recording. Still, many will never want to be recorded. The resulting gaps in our records could partially be filled by journaling but, as with non-recordable experiences, sometimes we’ll have only our biological memory to go back to. Both accidental and intentional leaks (as in revenge porn) remain a threat, exacerbated by extensive recording practices. Powerful bad actors are another: tech companies and governments have interests in records that often conflict with those of the general public.
When Siri’s co-creator Tom Gruber praises AI-assisted memory enhancement, we should be wary, and the prospect of a police state with access to data on everything we’ve ever done should make us think carefully before proceeding down this path. Consider, though, a less privacy-friendly argument. If records partially constitute ourselves, then prohibiting the records required for deeper personal narratives infringes on the very core of our being and forces us to remain shallower than we could be. We would not restrict people with biological super-memories or excessive journal-writers, and there is no prohibition on turning oneself into such a person. Analogously, if recording technology can constitute someone’s self, restricting it may appear an objectionable infringement upon our ability to constitute ourselves. Privacy concerns could conceivably require the suppression of natural memory, but they don’t; one might think memory enhancement should be treated likewise. Evidently, this argument must address the fact that external memories are easier to share, and subject to less distortion, than biological ones. Answering these challenges requires much more work but, given the value of extensive records, I believe that concerns about privacy and autonomy should be addressed through technological means (open-source software, encryption, automatic acquisition of consent, data deletion on request) and legal means (robust privacy rights and the regulation of bad actors). Given my positive argument above, powerful reasons exist to implement such safeguards: we should enable people to enhance their memories safely and responsibly. Another important challenge is that recording everything could conceivably have negative psychological effects. Knowing such records to be available, why would we bother to remember anything for ourselves?
Through lack of use, our biological memory might well atrophy (the use of digital maps and navigation appears to be having this effect on our ability to navigate our environs unaided). Extensive records might cause us to live in the past, become less open to new experiences and less able to cope with loss; being constantly recorded could promote self-censorship. On the other hand, conceivable positive effects include higher accountability and demands on one’s own behaviour; recording everything by default might allow us to live in the moment more; and, instead of straining our social relationships, it could make us more understanding of each other. We shouldn’t rely on speculation here – plenty of which exists both in sci-fi and in research such as Björn Lundgren’s paper ‘Against AI-improved Personal Memory’ (2021) – but current empirical results appear ambiguous and don’t assess widespread use of lifelogs. Negative effects presumably vary from person to person, and it hasn’t been shown that they outweigh the value of lifelogs. Even in light of these challenges, I believe we have compelling reasons at least to experiment with recording almost everything. Philosophy and empirical research can go only so far in establishing a technology’s consequences and desirability. However compelling the arguments, it seems plausible that the decision to radically enhance one’s memory must involve an element of individual preference. So, what kind of person, with what kind of (extended) memory and recording practice, would you like to be? Arguments and contemplation can help you think this through, but ultimately you must try for yourself.