
GOATReads: Psychology

Screen Time and Mental Illness: Is More Always Worse?

Is more screen time always bad for adolescents? Parents are often concerned that too much screen time (e.g., time spent looking at social media apps on smartphones, monitors while gaming, or TV while watching series or movies) may negatively affect the mental health of their kids. There is indeed ample psychological research that supports this idea. For example, a recent integration of meta-analyses containing data from more than 1.9 million people showed a statistically significant association between increased time spent on social media and depression (Sanders and co-workers, 2024). But is it really that simple, or could there be other, previously unidentified factors that are crucial to understanding the relationship between screen time and mental health?

A new study on time spent on social media, gaming, and TV, and mental health problems

A new study published October 2nd, 2025, in the scientific journal Psychiatry Research analyzed data from more than 23,000 Norwegian adolescents aged between 14 and 16 years, focusing on screen time for social media, gaming, and TV, and on mental health problems (Frei and co-workers, 2025). The following mental health problems were considered:

• Substance abuse
• Schizophrenia
• Bipolar disorder
• Depression
• Anxiety
• Eating disorders
• Hyperkinetic disorders
• Pervasive developmental disorders

Importantly, the scientists who conducted the study, entitled “The phenotypic and genetic relationship between adolescent mental health and time spent on social media, gaming, and TV,” also analyzed genetic data from the people involved in the study. Genes are a factor that is typically not analyzed in studies on screen time and mental health, but they may be highly relevant, as many mental health disorders are strongly affected by genetic variations between people.

What did the scientists find out?

Overall, 3,829 participants had a psychiatric diagnosis, while the others did not. For all three screen time types (TV, gaming, and social media), there were clear associations with mental illness:

1. Adolescents who watched TV three to four hours a day or more had a significantly higher chance of having a psychiatric diagnosis compared to adolescents who watched less TV.
2. For gaming, adolescents who spent the least amount of time gaming had a lower chance of having a psychiatric diagnosis than other adolescents. In contrast, adolescents who played video games for three to four hours a day or more had a significantly higher chance of having a psychiatric diagnosis compared to adolescents who spent less time gaming.
3. For social media use, adolescents who spent the most time on social media, but also those who spent the least time on social media, had a significantly higher chance of having a psychiatric diagnosis compared to other adolescents.

In addition to formal psychiatric diagnoses, the scientists also considered the participants’ self-reported symptoms in their analyses. Spending three to four or more hours a day watching screens was associated with higher symptom severity scores.

In the last analyses, the scientists used the genetic data collected from the participants to determine their individual genetic risks for various mental health disorders. Interestingly, the risk scores for depression, ADHD, autism spectrum disorder, and anorexia nervosa showed significant associations with screen time. This suggests that the genetic risk for these disorders could also have an impact on screen time.
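The article does not spell out how these individual genetic risk scores were computed, but scores of this kind are usually polygenic risk scores: a weighted sum of a person's risk-allele counts, with per-variant weights taken from genome-wide association studies. Below is a minimal sketch of that idea; all variant IDs, weights, and genotypes are hypothetical placeholders, not values from the study.

```python
# Minimal polygenic-risk-score sketch: score = sum of (effect weight x allele dosage).
# All variant IDs, weights, and genotypes below are hypothetical placeholders,
# not data from the study discussed above.

# Per-variant effect weights, e.g. log-odds from a published GWAS of depression.
effect_weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}

# One participant's allele dosages (0, 1, or 2 copies of the risk allele).
genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

def polygenic_risk_score(weights, dosages):
    """Weighted sum over the variants present in the weight table."""
    return sum(w * dosages.get(snp, 0) for snp, w in weights.items())

print(polygenic_risk_score(effect_weights, genotype))  # 0.12*2 + 0.08*1 = 0.32
```

In practice, such scores are built from thousands to millions of variants and standardized within the study population before being tested for association with outcomes such as screen time.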
The scientists then used advanced statistical models to find out to what extent the association between screen time and mental health disorders could be attributed to genetic effects.

Takeaway: Consider genetics

When scientists find an association between social media use and mental health problems, it is typically hard to tell the direction of the effect. On the one hand, too much social media use may lead to loneliness and losing friends in real life, and this could then lead to mental health problems. On the other hand, someone with mental health problems may struggle to maintain a rich social life and then spend more time on social media because they are bored or want to find helpful advice. The results of the study show that both ideas may be too simplistic and that, in fact, genetic liability for a disorder may also lead to liability for increased screen time.

Moreover, the study found an interesting result in that adolescents who spent the least time on social media also had an increased risk for mental health issues. This may reflect that some adolescents with problems that affect social functioning, such as autism spectrum disorders, may have trouble connecting with other people even on social media. An important finding that deserves more research!

Key points

• A new study investigated the association of screen time for social media, gaming, and TV with mental health.
• The scientists also considered a large number of genetic influences on psychiatric disorders.
• Adolescents who spent three to four hours or more watching screens had a higher chance of having a mental disorder.
• Genetic influences explained a considerable amount of the association between screen time and mental health.
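To make phrases like “a significantly higher chance of having a psychiatric diagnosis” concrete: associations of this kind are often summarized as an odds ratio computed from a two-by-two table of exposure (e.g., heavy screen time) against outcome (diagnosis yes/no). Here is a minimal sketch with invented counts, which are not the study's data:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table -- counts invented for illustration only,
# not taken from the Norwegian study discussed above.
# Rows: heavy screen time (3-4+ hours/day) vs. lighter use.
# Columns: psychiatric diagnosis yes / no.
table = [[300, 1700],   # heavy users: 300 diagnosed, 1700 not
         [500, 6500]]   # lighter users: 500 diagnosed, 6500 not

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
# An odds ratio above 1 means the diagnosis is more common among heavy users;
# here (300/1700) / (500/6500) is roughly 2.3.
```

The study's actual models are surely more involved (categorical screen-time bands, covariates, and genetic terms), but the odds ratio is the basic quantity behind "higher chance" statements like those above.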

GOATReads: Philosophy

Revolutionary tolerance

In an age of ferocious religious bloodletting, Sebastian Castellio argued that everyone seems like a heretic to someone else

Born in 1515, Sebastian Castellio lived in an age of execution. In terms of judicial killings in Europe, the period between 1500 and 1700 outstripped any era before or after. The new heresies of the Protestant Reformation prompted an initial burst of executions: approximately 5,000 people were put to death for their religious beliefs in the 16th century. This was followed by far deadlier witch hunts, which saw about 50,000 people legally exterminated for witchcraft. Most terrifying for ordinary Christians was the fact that the standards of orthodox thought and behaviour shifted like the sand. One minute, Henry VIII was killing Protestants; the next, Catholics. His daughter Mary earned her ‘bloody’ nickname by executing the very Protestants who had been in favour under her half-brother Edward VI. If we add to the death toll the millions of people who died in the Thirty Years’ War and other Wars of Religion, we see that heresy could be truly lethal.

In the midst of this fear and uncertainty, Castellio, a professor of Greek, stepped back from the dogmatic clashes of the day. Rather than echo the calls for battle, he issued a plea for toleration. His great insight was that everyone was a heretic according to somebody else. Castellio questioned the idea of heresy itself. ‘After a careful investigation into the meaning of the term heretic,’ he wrote, ‘I can discover no more than this, that we regard those as heretics with whom we disagree.’ It was the religious equivalent of the notion introduced a few decades later by Michel de Montaigne that a barbarian is, in essence, someone not like me. To Castellio, being ‘someone with whom we disagree’ was entirely insufficient grounds for being sentenced to death.

Today, we take these ideas for granted, but at the time they were revolutionary. In an era with no sense of separation between Church and state, heresy was equivalent to political sedition. Castellio disagreed. He was one of the first to argue that the only way the religiously divided states of Europe could survive was through toleration. Castellio’s abhorrence at heresy executions led him to question other traditional doctrines and practices within Christianity, including the Bible itself. His views, often developed in opposition to those of his nemesis, John Calvin, would start to grow a liberal branch of Christianity, one that challenged the dogmatic confessions of the churches that emerged from the Reformation and instead championed tolerance, free thinking and reason. Castellio’s ideas eventually won out in the West. But today his name is unknown.

Castellio was an unlikely innovator. He was the son of a farmer, born in the village of Saint-Martin-du-Frêne in France, which at the time lay in the independent duchy of Savoy, about halfway between Lyon and Geneva. In a stunning example of social mobility for the time, he would rise from his peasant origins to become professor of Greek at the University of Basel in the Swiss Confederacy. Though we know little about his formative years, he started his ascent by receiving a humanist education in Lyon, where he demonstrated a gift for learning languages. As a boy, he stomped grapes on his father’s farm; by the age of 23, he was composing Greek verse. He also learned Latin, Hebrew, Italian and some German.
In 1540, at the age of 25, fear of religious persecution drove the newly Protestant Castellio to leave France for the Holy Roman Empire. In Protestant Strasbourg, he met Calvin, who had been exiled from Geneva for too aggressively pushing his ideas for religious reform on the city. Their relationship was initially cordial, and Castellio even lived for a time in Calvin’s house. In 1541, a new group of leaders in Geneva invited Calvin back. He returned, and Castellio joined him as head of the Geneva school. But feelings hardened between the two men when Calvin criticised as inaccurate a French translation of the New Testament that Castellio had drafted. Their relationship soon devolved into open hostility when Calvin thwarted Castellio’s nomination to become one of the city’s pastors. Calvin refused to admit Castellio to the ranks of the pastors because Castellio deviated from Calvin’s views on the biblical book of the Song of Songs and on the interpretation of Christ’s ‘descent into hell’ as confessed in the Apostles’ Creed. It was Castellio’s first lesson in the dangers inherent in dogmatic orthodoxy, whether Protestant or Catholic, and it undoubtedly played a role in leading him to think differently. Catholic persecution had forced him to leave France; now Protestant intransigence drove him from Geneva.

No longer welcome in Geneva, Castellio moved to Basel, where he found work as a poorly paid corrector for a printing press. Castellio used his position at the press to publish several works, including a new translation of the Bible in 1551, rendering it into elegant Latin, as if it had been written by someone like Cicero. His distinctly non-verbatim approach to the translation revealed that he placed more importance on the Bible’s overall message and meaning than on its individual words. Moreover, in his dedicatory letter to the young King Edward VI of England, Castellio revealed his opposition to religious persecution: ‘If someone disagrees with us on a single point of religion,’ he laments, ‘we condemn him and pursue him to the corners of the earth … We exercise cruelty with the sword, flame and water, and exterminate the destitute and defenceless.’

But it was an event in Geneva in 1553 that catalysed Castellio’s thinking. In October that year, the Spaniard Michael Servetus was burned alive for heresy. Servetus was a medical doctor who was, incidentally, the first European to describe pulmonary circulation. But he also rejected the orthodox doctrine of the trinity – the belief that God is one deity in three co-eternal persons, Father, Son, and Holy Spirit – held by Catholics and Protestants alike. Although he was rejected by almost everyone, Servetus’s antitrinitarian ideas would later influence early Unitarian churches in Poland and eastern Europe. Servetus first published his views in 1531, and the condemnation of those ideas forced him to spend the next two decades underground, living under the assumed name Michel de Villeneuve. His identity was discovered after the publication of the Christian Restitution (1553), a book that revived his heretical views of the trinity. After slipping out of jail to escape the Catholic Inquisition in France, Servetus found his way to Protestant Geneva, where Calvin prompted the city officials to arrest him.
The city council found Servetus guilty of heresy and sent him to the stake, sadistically using green wood to slow the flames and consequently increase his suffering. Calvin defended the death sentence in print, explaining ‘that heretics are rightly to be coerced by the sword.’ He argued that ‘when someone rips religion from its foundations, utters detestable blasphemies against God, leads souls to destruction with impious and pestilent dogmas, and openly defects from the one God and his pure doctrine, it is necessary to apply the ultimate remedy to prevent the deadly poison from spreading further.’ To support his contention, Calvin cited Deuteronomy 13: ‘If anyone secretly entices you …, saying, “Let us go serve other gods” …, you shall surely kill them.’

Most of Calvin’s fellow pastors and theologians at the time agreed. Philipp Melanchthon, Martin Luther’s successor in Wittenberg, praised Calvin for his role in the execution, telling him that the ‘church owes you and posterity will owe you a debt of gratitude. I completely agree with your judgment, and I affirm that your magistrates acted justly when they executed a blasphemer by a judicial process.’ Many of Calvin’s supporters cited Servetus’s ‘blasphemy’ rather than ‘heresy’, for some thought the term carried an even clearer biblical injunction: ‘Whoever blasphemes the name of the Lord shall surely be put to death.’

Castellio disagreed. Recently named professor of Greek at the University of Basel on the strength of his Bible translation and other works, Castellio, together with some friends, published a pseudonymous book entitled Concerning Heretics: Whether They Are to be Persecuted and How They Are to be Treated (1554). Under various assumed names, Castellio portrayed executions for heresy as not only cruel and inhumane but positively unchristian. The executions are bad enough by themselves, he writes:

But a more capital offence is added when this conduct is justified under the robe of Christ and is defended as being in accord with his will, when Satan could not devise anything more repugnant to the nature and will of Christ! … What do you think Christ will do when he comes? Will he commend such things? Will he approve of them?

Instead, Castellio declares: ‘I do not see how we can retain the name of Christian if we do not imitate His clemency and mercy.’ Here we get an early glimpse of the ‘modern Jesus’, the kind and merciful Prince of Peace. This was not a new notion of Castellio’s, but throughout the Middle Ages and Reformation period the image of a merciful Christ stood side by side with that of a vengeful saviour. Before the Crusades, Pope Urban II insisted that ‘Christ commanded’ the military adventure to destroy the ‘vile race’ of the Muslims. Books published by one of Calvin’s printers in Geneva all bore Jesus’ biblical quotation: ‘I have not come to bring peace but a sword.’ But the sword-wielding, vengeful Christ was foreign to Castellio’s conception. Instead, he saw the execution of Servetus as a profoundly un-Christ-like act, made worse by the fact that it was instigated by Calvin, who was supposed to be one of the lights of the Church.

Castellio wrote a second book on toleration, Against Calvin’s Little Book (Contra Libellum Calvini), in which he countered point by point Calvin’s arguments defending the execution of Servetus. He repeatedly pointed out the brutality of executions for heresy in general and argued that it was bloodlust, cruelty and naked ambition for power that drove Calvin to push for Servetus’s execution.
Most offensive to Castellio was Calvin’s argument that Servetus’s death was necessary to defend Christian doctrine. As Castellio put it in the best-known line from the treatise: ‘To kill a man is not to defend a doctrine, but to kill a man.’ The conflict over Servetus’s execution set Calvin and Castellio permanently at odds with one another. Calvin couldn’t stand Castellio’s willingness to overlook the threat to religion and public order posed by those who deviated from the true faith. Castellio couldn’t abide Calvin’s rigid dogmatism.

For Castellio believed that, properly understood, Christianity is more about moral behaviour than doctrine. The Reformation itself had made that clear to him. After decades of wrangling over doctrine – both between Protestants and Catholics, and among Protestants themselves – it was clear that no sect was ever going to ‘win’. Castellio believed that it was far more important, therefore, to focus on moral living. ‘We dispute so much about eternal election, predestination, and the trinity,’ he said, ‘insisting on things we can never see and ignoring the things that are right in front of us. From this are born infinite disputes, which result in the bloodshed of the weak and the poor if they don’t agree with us.’ And what are ‘the things that are right in front of us’? The clear moral precepts of Christianity. ‘The precepts of piety are certain: to love God and your neighbour, to love your enemies, to be patient, merciful, and kind, and carry out other similar duties.’ Anyone of any religious sect can practise these things.

The way to assess how well a group of Christians lived up to its duties was to judge them by the ‘fruits of the spirit’ listed by St Paul in Galatians 5, namely, ‘love, joy, peace, patience, kindness, generosity, faithfulness, gentleness, and self-control.’ ‘By such fruits,’ Castellio declared, ‘it is possible to judge which sects are the best, namely those which believe and obey Christ and imitate his life, whether they’re called Papist, Lutheran, Zwinglian, Anabaptist or anything else.’ Indeed, these denominational designations meant nothing to Castellio, who insisted that good Christians may be found in any of them, for ‘they all believe in the same God, in the same Lord and Saviour Christ.’ For Castellio, these were the core messages of the Bible: believing in God and in Jesus as the Saviour, and loving your neighbour as yourself.

After years of fighting with Calvin over toleration, personal attacks and predestination – he utterly rejected Calvin’s idea that God had predestined both those who would be saved and those who would be condemned to eternity in hell – at the end of his life Castellio returned to focus on the Bible. His thoughts on biblical interpretation mark his most radical departure from tradition and most clearly anticipate later liberal Protestantism. For Castellio rejected the idea that the entire Bible was divinely inspired, and taught that one should use the tools of human reason to understand the human authors who produced it.

Castellio had seen what unquestioning trust in cherrypicked biblical quotations could lead to. Calvin had used it to defend killing heretics. It lay at the root of the interminable debates on how to interpret what happens to the bread and wine of holy communion. Each sect of the Reformation had its favourite verses to justify its own doctrines.
Castellio recognised that flinging competing biblical quotations at each other was getting Christians nowhere. ‘Unless some other rule is discovered,’ he declared, ‘I see no way here of attaining concord.’ In an astonishing book for its time, The Art of Doubting and Believing, of Knowing and Not Knowing, Castellio suggested that Christians should not get hung up on exact words and phrases but should consider the overall message and ‘tenor’ of the biblical text. There are things in the Bible, he said, that seem contradictory or absurd. It’s also clear that the biblical authors sometimes made mistakes, and that ‘something may have escaped their memory or judgment.’ Moreover, St Paul himself tells us that he sometimes wrote things from his own judgment, without a command from the Lord. And so, Castellio argued: ‘I don’t see why we attribute to these authors more than they attributed to themselves.’

Rather than consider the entire Bible the inspired word of God, Castellio believed it could be divided into three kinds of texts: revelation/prophecy, knowledge, and instruction. Only the sections he describes as revelation or prophecy are to be understood as the actual word of God. All the other sections – which constitute a large majority of the Bible – are to be understood as the words of humans, which can be evaluated and interpreted in the same way as other ancient texts. And if most of the Bible is a human product, we may use human reason to understand it. Indeed, reason precedes and is more trustworthy than the actual words of Scripture: ‘Reason is, so to speak, the daughter of God … Reason, I say, is a sort of eternal word of God, much older and surer than letters and ceremonies … Reason is a sort of interior and eternal word of truth always speaking.’

Therefore, in any Christian controversy, the place to start is not with the Bible but with reason: ‘Our method will be the following: we start by treating according to reason alone the debated questions. Then we add the authorities taken from Scripture.’ Here we see the method of medieval scholastic theology turned upside down. Whereas the medieval scholastics were engaged with ‘faith seeking understanding’ – starting with the principles of the faith and then trying to understand them using reason – Castellio starts with reason and then adds biblical authorities in support.

Castellio knew that he would be criticised for his ideas: ‘They will cry out that this is blasphemy. The Holy Scriptures, according to them, were written under the divine breath.’ But he realised someone had to deviate from tradition; the old ways were leading to nothing but division and persecution. ‘We must dare something new if we want to help humanity,’ he wrote. ‘We see that advances in the arts, as in other things, are made not by those content with the status quo, but by those who dare to alter and correct those things that have been found defective.’

There was one other key to moving forward, Castellio believed: keeping an open mind and learning how to doubt – hence the title of his book, The Art of Doubting. Even with his new rational approach to the Bible, some things would never be clear. There were many doctrines – for example, predestination, the trinity, and the nature of the Eucharist – over which Christians would always disagree. But that’s OK. The problem is not disagreement but closed minds.
‘A person whose mind is closed holds tenaciously to his opinion and prefers to give the lie to God himself and all the saints and angels if they are on the other side rather than to alter his opinion. Flee this vice,’ Castellio counsels, ‘as you would death itself.’

Death found Castellio before he finished writing The Art of Doubting. He died at the age of 48 in December 1563, five months before Calvin died. He left behind a wife and eight children, including a one-year-old and three others under 10. Calvin and his allies hounded him to the end; at the time of his death, Castellio was under investigation in Basel for allegations of heresy that originated with Calvin’s friend Theodore Beza.

Castellio’s final book was not published in full until 1981, but we know it circulated in manuscript after his death. This long delay in publication is one reason for Castellio’s relative obscurity. Censorship and pressure from his enemies meant that several of his works – often the most interesting and radical ones – were not published until many years after his death. Even so, in certain circles, Castellio has long been appreciated. Michel de Montaigne called him an ‘outstanding’ scholar. John Locke referred to him as ‘so learned a man’ and owned several of his books. Voltaire assessed that he was ‘more learned’ than Calvin.

The Enlightenment era saw the flourishing of several of Castellio’s ideas, most notably the primacy of reason. Enlightenment rationalism and advances in science cast deeply into doubt Moses’ authorship of Genesis, the literal truth of the Bible’s creation account and of early biblical history, and the divine inspiration of the Bible. Thomas Chubb, an English deist writing during the 18th century, found a manuscript of Castellio’s Art of Doubting and noted that the ‘excellent Castellio’ was ‘of the same opinion’ as himself regarding the divine inspiration of the Bible. In addition, Enlightenment thinkers downplayed Christian dogma in favour of the faith’s moral precepts. Unlike controversial doctrines, moral precepts such as those from the Ten Commandments not to kill, steal, commit adultery or bear false witness were agreed upon by all and seen as consonant with human reason. Thomas Jefferson rejected miracles and the Christian doctrines based on them but considered Jesus’ teachings ‘the most sublime and benevolent code of morals which has ever been offered to man.’

By 1835, on the 300th anniversary of the Reformation in Geneva, Castellio’s vision of Christianity had overtaken that of the city’s own far more famous reformer, John Calvin. As the historians Philip Benedict and Sarah Scholl note: ‘By the turn of the 19th century, a liberal, non-dogmatic faith had come to dominate the national church. The all-Protestant organising committee [for the anniversary] consequently spoke of “promoting the grand principles of tolerance which are those of true Protestantism”.’ A ‘liberal, non-dogmatic faith’ grounded in the ‘grand principles of tolerance’ was exactly what Castellio – but decidedly not Calvin – had preached 300 years before.

We should not push the comparison between Castellio and liberal Protestantism too far, however. Castellio knew of few of the scientific advances available to his 19th-century successors.
Like everyone else of his time, he was what we would call today a ‘young-Earth creationist’ and even joined in the popular game of trying to date Earth based on the biblical generations (his answer: 5,529 years, 6 months and 10 days). It is also difficult to trace Castellio’s direct influence on later generations of liberal Protestant theologians. It seems that few of them arrived at their ideas by reading Castellio. But his ideas had prompted an initial reconsideration of long-held religious truths and lingered just below the surface for centuries.

More than any direct influence he had on later generations, Castellio’s enduring significance is that he was one of the first to appreciate some of the profound problems caused by the traditional understandings of Christianity, the Bible and, especially, heresy. Taken literally, the Bible can be used – and, in fact, has been used – to justify the practices of slavery and polygamy, the repression of women, the execution of heretics and witches, and the persecution of Jews, among other horrors. The faith can easily be weaponised for political purposes and the exclusion of those who ‘aren’t like us’ or who ‘don’t think like us’. Castellio’s Christianity, by contrast, was short on doctrine and long on openness and morality. That was the message of the Bible for him. Any reading of the Bible or understanding of the faith that did not produce these, he believed, failed to convey the true message of Christianity.

The Labor of Play

An Upper East Sider with advanced degrees playing Wordle over espresso; a suburban teenager pairing Call of Duty: Warzone with bong rips before work: both play, arguably, for the same reasons. Each delights in low-stakes release. Each enjoys the sense of completing a task (spelling words, killing enemies) that feels vaguely moral, and which the player might take pride in, even if virtually and for an audience composed only of themselves. What is different is only that the New York Times and other enlightened organs have now found ways to market trivial, dependence-inducing digital game products (already wildly lucrative in other settings) to the gentry. And the Times is hardly alone.

Is this boom in bourgeois gaming bad? The “inanity of many leisure activities” famously troubled the philosopher Theodor Adorno. An ardent anti-capitalist—and a half-Jewish German refugee who feared that frivolous entertainment was a handmaiden to fascism (living in Los Angeles must have been interesting!)—Adorno had no problem with leisure, but he suspected that so-called free time had, in late capitalism, become little more than a recharging period between episodes of labor extraction, i.e., work days. In his estimation, the problem was that leisure had come to serve capital, since now it replenished workers for the sake of work, rather than bettering them as human beings. As a result, he wrote that “‘free time’ is tending toward its own opposite, and is becoming a parody of itself.” The solution for Adorno was for leisure activities to be taken very, very seriously: the study of classical music, for example, for its own sake alone.

In this sense, Adorno surely would have despised Wordle and Call of Duty equally. Both are light distractions, after all, and both are deeply integrated into capitalism, perhaps even more deeply than Adorno tended to envision. If there’s one thing worse than free time being organized by the needs of capital, it might be free time becoming a pillar of capital. In that regard, the bonanza that has come from tethering bourgeois word games and logic games (and recipes and so forth) to the consumption of Serious News might seem to bear him out. Leisure in the 21st century is increasingly not just big business, but a framework needed to sustain a free press, along with other public goods that aren’t otherwise going to sustain themselves. The New York Times these days would not be solvent without revenue from subscriptions to its games suite, among other add-ons.

And yet, Adorno, as with most matters pop cultural, probably should not have the last word here. All of the above is true, but there are layers to gaming that surpass this analysis. One place to witness these layers is in the substantial number of recent books focused on trivial obsessives, including but not limited to: Claire McNear’s Answers in the Form of Questions, Adrienne Raphel’s Thinking Inside the Box, Oliver Roeder’s Seven Games, and Eric Thurm’s Avidly Reads Board Games. Each of these books explores communities that treat puzzles and games marketed as leisure activities with a gravity bordering on the spiritual. That is to say, these communities take puzzles and games very, very seriously, as Adorno might have preferred. Most of the games discussed in these books are wedded to capital in some fashion: crosswords are pillars of newspaper financing; Jeopardy!’s success is literally measured in dollars, and it airs on ad-based television; Monopoly dramatizes, well, a scramble for monopolies.
Even so, these games’ most ardent obsessives engage them in ways that absolutely exceed the logic of profit. The authors of these books convincingly reveal the potential of leisure—even that which has been partly captured by capital—to offer the promise of dissidence, or at least other spaces of being.

It seems significant that these books about word, board, or trivia game obsessives were published just before or during the COVID-19 pandemic. Of course they all must have been researched and written before early 2020, but the pandemic has made them a deeper read. Each book speaks to a quality of game playing that graduated from Adornian pastime to something more like mental-health manna during the depressive depths of 2020. In those depths, the feeling of devoting oneself to something unproductive became humanizing, not as a way to refuel before work but precisely as a fuck-you to work. One’s boss might ask them to give their whole brilliance—honed to a glistening edge by years of training and hard-won wisdom—to QuiznosPitchDeck.xls. But suddenly many people had the courage to reserve the very best of themselves for something else, like Learned League, SpellTower, or Animal Crossing. It is worth asking what comes of playing seriously as a core part of people’s identities in a moment when, for many, their work life no longer fills that role.

In the world of puzzles and games, a shift began about a decade ago. Perhaps it started with the calls from BuzzFeed. Or was it the now-defunct startup news website Capital New York? Slate? The Atlantic? All these outlets began sniffing out puzzles. They contacted editors and creators to find out what it would take to establish a serious daily crossword or variety game, usually with some dazzling new digital component. Some even moved forward, with varied success, adding habit-forming, intellect-confirming side hustles to their reporting and commentary, or their listicles. The shift occurred during a moment of nose-diving ad revenues in publishing, when something had to replace the classifieds, or else. In the years since, the New York Times perhaps above all has demonstrated the value of a well-executed newspaper games section, expanding theirs to multiple features and a popular $40 annual subscription. Puzzles and games have become integral to readers’ daily rituals, and in the process also a profitable lifeline for a sinking industry. And the windfall continues: digital puzzle and word game platforms are now (because of course they are) being launched by NFT speculators, VC-backed designers, and venues even larger than the Times.

Such financial concerns are far from the minds of the players profiled in Roeder’s Seven Games, which fixes its attention on those who probe the outer limits of games as solvable problems. True obsessives, including Roeder’s subjects, tend to conflate the object of their enthusiasm with life itself. In games both venerable (chess, bridge) and somewhat silly (checkers), serious players pursue transcendence. God comes up a lot. As Roeder memorably puts it, for a game like checkers—which an early computer program helped to effectively solve—a human player making all known correct moves can tie the creator of the universe. The solving of checkers, by some measure, staged how the Romantics viewed nature: as a set of riddles to be joyfully solved through the application of intellect. For some, that moment was a triumph. But each of the seven games in Roeder’s book has a different telos.
Some, like Go, are so vast in their possible outcomes that they were long considered incalculable. As devoted as some of its players are, Go continues to require intuition, a win for those anxious about how computing power has surpassed humans in so many games, including chess. For years, Go continued to promise a redoubt for whatever unique capacities humans might offer, although, in the past five years, new software driven by neural networks has closed the gap.

Poker, the subject of another of Roeder’s chapters, remains interesting in part precisely because it can never be purely rational: data helps, but even the best-played hand can easily be spoiled by random chance. Still, online poker has become a cesspool of bots and algorithmically informed playing. Poker can never be fully solved, but steering closer to perfection has largely killed the fun. That does tend to happen. I once spoke to an online poker pro who had become so good at the game that he felt he could not justify doing anything else. He calculated that relaxing with an issue of Harper’s cost him an average of about $30,000. Until he quit poker, leisure had become psychologically impossible for him. Sometimes gaming obsessives find euphoria; other times perfection delivers the game to capitalism’s hungriest dogs. Roeder explores the triumphs and tragedies that take place around these fringes.

The fuzzy line between gaming and the rest of human lives is also described in Thurm’s Avidly Reads Board Games. Aptly, Thurm opens with an anecdote about playing Catan while waiting for his grandfather to die. The book centers on a concept called the “magic circle,” a means of describing the all-encompassing interiority of a given board game, which a player must give themselves over to in order to participate. Any game, no matter how straightforward, stages a certain version of reality, with particular possibilities and limits, modes of governance, distributions of power, and so forth. The setup of any game, writes Thurm, “model[s] a conflict.” The degree of immersion that a board game creates has often driven designers to lead with ideology, from Nazism to socialism, from empathy for poor people to disdain for them. There are board games that exist to make people feel the rightness of each of these, a thrilling and also potentially terrifying possibility. Even many games that now seem unassuming began as screeds: Monopoly, for example, was created explicitly to teach players that rent-seeking makes everyone miserable. It is a truly American irony that such an offering—despite sucking from the first turn to the sour moment when everyone gives up on finishing—became history’s best-selling board game. Thurm, like Roeder, is interested in extremes, ideological extremes as much as obsessions.

McNear examines just one game, Jeopardy!, but it is the grande dame of trivia contests. She is well sourced, and a compelling writer. Answers in the Form of Questions is less of a “Why do people play games?” reflection than the others discussed here. Nonetheless, it engages a world of profound peculiarity, and often one of joy. Likely it is the ubiquity of Jeopardy! that makes it so gripping to learn about. The show is such terra firma that if the reader does not already know about its inner workings, the details are as potentially compelling as those that might be found in a book about the Supreme Court or volcanos. The details matter because the phenomenon is part of our collective experience.
It is not a disillusioning book, although it does demythologize the great players somewhat, in addition to revealing fault lines of sexism. Still, McNear mostly portrays a high-functioning and supportive community, and one in which there is much more than money at play and at stake. There is a certain tragedy to the narrative because we know how it ends: with the death of Alex Trebek in 2020, just after the book’s publication, which did not come close to tanking Jeopardy! ratings-wise, but which likely removed some of its iconic luster. This, along with the quantification of competition, including “smarter” approaches to training, might have chilled some of the show’s most quirky and democratic vibes. Still, McNear doesn’t tell a story of the show in decline. Rather, she offers an informed backstage tour. If it’s less invested in the question of why such a pursuit matters to competitors, the book is every bit as engaged with the sociology of both fans and players: both are deeply obsessed, and decidedly not in it for the money.

Finally, Raphel’s study of the crossword puzzle and its community, Thinking Inside the Box, is similar to McNear’s book in scope (mostly just one game, but a big one), and in its excellent sourcing. But Raphel writes and thinks with an ambition beyond descriptive history. Her early historical chapters give way to deep ruminations on the nature of knowledge in the later ones, and Raphel is not afraid to break from direct prose into poetry or other writerly experiments. She thinks like a professional historian, for example tying the earliest craze for crosswords (which began in the 1920s) to the Anglophone obsession with clues and detectives in 19th-century British literature. For Raphel, crosswords speak to greater orders of knowledge. In a rather sizeable canon of crossword history books, this is the first of its kind, and probably the smartest.

Raphel tells us that crosswords have bred their share of obsessives, and it is notable that their obsessions have outlasted the vogue for certain applications of computer-assisted construction that flourished in the 2010s. Crosswords can now be both made and solved by computers in many respects. Still, the game has rather easily survived as a human pastime. In some sense, what lurks today as a potentially existential challenge to the game is not that it might be solved; after all, this already happened in certain ways, and fans mostly shrugged. The real concern is that crosswords might become such a pillar of a rising bourgeois gaming industry that profit could saturate the market, gut the game’s charms, choke constructors on the promise of work-for-hire. As go crosswords, so goes the world?

Today, there is a sort of race underway in contemporary word, trivia, and board games. It is a struggle between the forces of profit—which are increasingly hungry for more content—and the forces of obsession—the kind described in these four books—which aim for depth of experience, something beyond the game as a product.

GOATReads: History

Did an Enslaved Chocolatier Help Hercules Mulligan Foil a Plot to Assassinate George Washington?

New research sheds light on the possible identity of Cato, the Black man who conveyed the tailor’s lifesaving intelligence to the Americans during the Revolutionary War

According to popular lore, Hercules Mulligan—tailor, spy and ardent patriot of Hamilton fame—was working at his Manhattan shop on a frigid night in 1779 when a British officer bustled in, looking for an overcoat. Precious few orders had been placed that day, so Mulligan was happy to oblige, despite the late hour. The intimate act of taking measurements was often eased by conversation, so Mulligan began prompting his customer with a few innocent questions. Suddenly, a boasting tale spilled out of the soldier’s mouth. His days of keeping watch in the freezing cold would soon be over, he was sure of it. In fact, the man bragged, the entire war would end in mere days with a sweeping British victory. What harm would it do to tell his tailor? After all, everyone would soon know about the brilliant plot to capture and assassinate General George Washington.

Mulligan calmly recorded the distance from the nape of the man’s neck to his waist and the circumference of his chest, but inside, his mind was racing. After the officer left, Mulligan began taking much more detailed notes, capturing every aspect of the imminent plot the man had described. Missive in hand, he quickly locked up and sprinted down to the docks, searching for his trusted assistant, an enslaved man named Cato, in hopes of spiriting the message out of British-occupied New York City and into the hands of an American contact, thereby saving the general—and perhaps even the young nation itself.

This is a compelling and oft-repeated narrative, but how much of it can be proved? How do contemporary historians know that the tailor was a spy at all? And who was the courier with enough bravery and skill to escape New York Harbor and safely convey the intelligence to the Americans?

Documents newly discovered by historian Claire Bellerjeau, one of the co-authors of this article, shed light on Mulligan’s life and the possible identity of the Black man believed to have helped him. These papers focus on an individual by the name of Cato, who was enslaved by members of the powerful Schuyler family, then fled New York around the time of the foiled assassination plot. They add to the growing corpus of previously untold and incomplete stories of Black patriots. “These findings provide a more nuanced view of Cato,” underscoring the “role of Black Americans in the Colonies and the new nation,” says Bill Bleyer, author of George Washington's Long Island Spy Ring.

New York City fell into British hands in the fall of 1776. Many patriots remained behind enemy lines, and Mulligan wasn’t the only spy in town. The work of a group now called the Culper Spy Ring produced hundreds of letters, more than 190 of which still exist. None of Mulligan’s correspondence survives, but he is mentioned by name in a May 8, 1780, Culper letter from American spymaster Major Benjamin Tallmadge to Washington. Sometime in the previous weeks, a key member of Washington’s spy network in New York, a merchant named Robert Townsend, had suddenly refused to continue in his role as lead spy, citing the danger and stress of his position. His counterpart on Long Island, a farmer named Abraham Woodhull (alias Samuel Culper), needed a new agent in the city, and “Mr. Mulligan” was one of two names suggested, along with a “Mr. Duchie.”
Soon after, however, Townsend relented and agreed to continue collecting intelligence, so Mulligan never joined the Culper ring. But the letter stands as evidence that Mulligan was a known patriot and willing operative for the Americans during the Revolutionary War.

No other primary source backs up Mulligan’s work as a spy. The earliest telling of his role in thwarting the assassination plot appears in recollections composed by Alexander Hamilton’s son John Church Hamilton, who wrote the following in an 1834 biography of his father:

A partisan officer, a native of New York, called at the shop of Mulligan late in the evening, to obtain a watch-coat. The late hour awakened curiosity. After some inquiries, the officer vauntingly boasted that, before another day, they would have his rebel general in their hands. This staunch patriot, as soon as the officer left him, hastened unobserved to the wharf and dispatched a billet by a Negro, giving information of the design.

Details about Mulligan’s brave courier, called simply “a Negro” in the initial account, became more elaborate over time. The name “Cato” first appears a century later, in a 1937 book titled Hercules Mulligan: Confidential Correspondent of General Washington. Author Michael J. O’Brien asserted that it was Mulligan who’d enslaved Cato, “whose name has been handed down in tradition,” but provided no additional information about his identity. Enslaved Black men were often assigned names from the classical world, such as Cato, Jupiter and Caesar. These names tended to reflect the enslavers’ desire to mock the powerlessness of those they enslaved and demonstrate their own sophistication and classical education in Greek and Roman history.

While narratives concerning Cato’s enslavement by Mulligan and work in his shop are widely accepted today, no contemporary documents exist to prove these claims. Though enslaved people labored in a number of New York City tailor shops around the time of the American Revolution, Mulligan’s workforce in 1774 appears to have consisted of three Scottish indentured servants. Were elements of Cato’s story preserved in oral tradition, or are they merely historical embellishment?

In 1790, the first federal census indicated that Mulligan enslaved one individual, with no further details provided. The next time Mulligan appears in the census records, in 1820, he is still listed as having only one enslaved person in his household, an unnamed woman in a column marked “Females of 45 and upwards.” It’s certainly possible that the person Mulligan enslaved in 1790 was Cato, and that this arrangement had existed for at least a dozen years, going back to the time of the Revolution. Perhaps Mulligan later sold or manumitted Cato and then purchased a Black woman prior to the 1820 census—but this seems unlikely. Mulligan was a charter member of the New York Manumission Society, which advocated for the gradual emancipation of enslaved people in the state. His name appears on the organization’s roll in February 1785 and again in May 1787, alongside other notable figures such as Hamilton, John Jay and Townsend. Although Mulligan (and many other members of the group) continued to enslave people, seemingly placing them at odds with the society’s goals, the tailor was less likely to be actively engaged in the buying and selling of enslaved people during that time. An alternative explanation is that the individual listed in both census records was the same woman.
By 1820, she may have been too old to be freed under New York state law, which prohibited people over the age of 50 from being manumitted so that the Overseers of the Poor would not be burdened with the expense of their care. (These officials were in charge of the welfare of indigent residents and also certified the manumission of enslaved people.)

But what if the Cato of legend was enslaved by someone else? The earliest account does not imply that the Black courier was enslaved by Mulligan, only that he was present at the docks, and the tradition O’Brien cites is quite specific regarding his name. A Black man named Cato Howe is sometimes credited with being the man Mulligan used to send the dispatch. Howe served with the Second Massachusetts Regiment and participated in major military campaigns, like the winter of 1777 encampment at Valley Forge and the 1778 Battle of Monmouth. After his discharge in 1783, Howe returned to Plymouth, Massachusetts, where he lived in a Black community called Parting Ways and worked as a farmer until his death in 1824. While several historians share an accurate story of Howe the Massachusetts soldier, many others conflate this man with the Cato who was supposedly enslaved behind enemy lines in Manhattan and aided Mulligan by passing along crucial intelligence.

A fresh examination of the historical evidence offers another compelling possibility for the mysterious Cato’s identity. The research is based on two documents: one, a description of an enslaved man named Cato in the archives of Trinity Church, where Mulligan was an active congregant and later buried, and the other, a newspaper advertisement seeking someone who appears to be this same Cato, who had “absconded”—likely by boat—at the very time of a newly discovered assassination plot against Washington in March 1779. As Steven J. Niven, a historian at the Hutchins Center for African & African American Research at Harvard University, explains, the new analysis “serves as a model for other writers seeking to weave together church and census records, runaway slave adverts, and related fragments of evidence now available through open access digital humanities projects.”

After the original Trinity Church building was destroyed by fire in September 1776, congregants began worshipping a few blocks away at St. Paul’s Chapel. Some of the only primary documents from that time in the Trinity Church Archives are the 1782 pew rent ledgers, which record individuals who paid for seats at St. Paul’s, as was the custom at the time. An entry in the ledgers indicates that Dirck Schuyler’s family paid in cash for the seat of a man named Cato. Dirck’s household was a branch of the powerful Schuyler family, whose members had been important figures in New York society and politics for generations. The fact that the Cato named in the church records is not listed as a Schuyler relative and lacks a last name of his own indicates that he was almost certainly an enslaved Black man who was held in high enough regard by the family to be allowed to sit in their pew rather than being consigned to the balcony with the other enslaved people. Only two other enslaved people were recorded as sitting in the expensive pews of St. Paul’s in 1782; both are women, described only as “DePeyster’s wench” and “Roomer’s wench.”

Dirck and his wife, Ann Mary Schuyler, were members of Trinity Church and successful chocolatiers in Manhattan, with a shop on Maiden Lane, not far from Mulligan’s tailor shop on Queen Street.
Their chocolates, sold “by the box or less quantity,” according to an advertisement, were a product of the labor of Cato and other skilled enslaved people. Enslaved workers were key to the Colonial chocolate business: A 1772 ad sought to purchase a “valuable” Black man who had been “bred to the chocolate-making business,” while a 1775 listing announced the sale of a 32-year-old Black man who “understands the business of chocolate-making.” That work would have included hefting heavy sacks of cocoa beans; grinding cocoa powder in a large mill; and even rendering animal fat slowly over a fire to produce tallow, which was used to give the chocolate a creamy texture. Maiden Lane was home to several other chocolate makers, such as Mark Murphy, who advertised “chocolate of the first quality,” and Peter Low, who concluded a 1771 fugitive slave ad by reiterating that he “continues to make and sell chocolate, at his house on the upper end of Maiden Lane, near the Broadway, where those who please to favor him with their custom may be supplied with that which is good, on reasonable terms.”

Mulligan, meanwhile, was a vestryman of Trinity Church, meaning he was one of the parishioners who managed the administration and financial concerns of the congregation. He was also a close friend of Hamilton, whose future wife, Elizabeth Schuyler, was a distant cousin of Dirck. While Mulligan may not have been close with this particular Cato, he was almost certainly aware of him.

Interestingly, an enslaved man named Cato appears to have gone missing around the same time as the plot to assassinate Washington. On March 24, 1779, Ann Mary (by now a widow) ran a notice in the Royal Gazette seeking information about the escapee:

Absconded from his mistress, the widow of the late Dirck Schuyler of this city, a Negro man named Cato, alias Joshua, about 30 years old, strong-made and of a yellowish complexion, and is well known in this city, he is designed to go a privateering, and I do forewarn all captains of privateers and others from harboring or concealing him.

The timing of this ad is not insignificant. While historians have long known that the British formulated an assassination plot against Washington in 1779, the exact date was unknown. In 2023, however, three students at James Madison University uncovered a Virginia court document apparently connected to the plot. Dated March 13, 1779, the bond agreement references an upcoming trial for a conspiracy “to murder George Washington and the honorable members of the Continental Congress.”

Given this timeframe, could it be that Cato, who appears to have had ambitions to make money at sea as a privateer, was the same Black man whom John C. Hamilton described as being present at the docks to transport Mulligan’s dispatch across the water? And is Cato’s reappearance in the Schuyler pew in 1782 evidence that his absence from the chocolate shop was only temporary, as he assisted Mulligan with spying activities, and not a full-fledged escape from his enslavers? The timing seemingly aligns, and according to the ad, Cato was “well known in this city.” Was he well known enough for Mulligan to trust him with such an urgent and dangerous message? Knowing that Cato was a member of the same church might have given Mulligan confidence in the man’s character, and a further examination of his alias may provide a clue as to his intentions.
In addition to the name given to him by his enslaver, Cato seemingly chose a biblical name for himself: Joshua, according to the ad placed by Dirck’s widow. The selection of that particular name could be significant. In the Hebrew Bible, Joshua is an enslaved Israelite who escapes Egypt in the Exodus and becomes one of the faithful spies sent to scope out the land of Canaan in the Book of Numbers. Later, it is Joshua who assumes leadership of the Hebrew people after the death of Moses and leads them into war to conquer the Promised Land. It doesn’t require a great leap of faith to imagine a regular churchgoer like Cato recognizing the symbolism such a name might carry in the context of his own life.

Elizabeth J. West, a literary scholar at Georgia State University, says that Cato’s adoption of the name Joshua exemplifies “a legacy of enslaved African Americans proclaiming their agency” by reinterpreting the stories of biblical heroes and heroines. She adds that his life offers a lens through which to view “the everyday role of African Americans [in] business and industry in Colonial New York.” Niven, of the Hutchins Center, connects the new research to “the recent flowering of local history projects and deeper investigations of forgotten people of color in the Colonial archive.” The analysis “carefully separates long-established myth-making with a nuanced microbiography of the skilled chocolatier Cato,” he explains.

We will likely never know for certain whether Cato, the enslaved worker in the Schuylers’ shop, was also a spy for the Americans whose courage and skill on the water helped thwart a credible assassination plot against Washington. Whatever the case, however, Cato stands as evidence of the many often “invisible” figures whose presence, skill and labor helped shape the course of the Revolution—and the history of the United States.

GOATReads: Psychology

Why Young Minds Are Breaking Faster Than We Can Mend Them

Key points

- Youth happiness has collapsed, with no promise of the U-curve of happiness applying to the current generation.
- Today's teens face a true poly-crisis world—from COVID scars to AI upheaval, all during their formative years.
- Uncertainty will remain a constant, and resilience requires breadth and adaptability over narrow mastery.
- Preparing youth for uncertainty means optionality, offering more doors to walk through.

How we experience happiness used to follow a predictable rhythm. For decades, researchers across psychology and economics identified a stable "U-curve" of life satisfaction. We start off high as children, dip in early adulthood under the weight of responsibility and uncertainty, and rebound in old age once perspective softens the sharper edges of life. This was never some immutable law of nature, mind you. It was an artefact of the way our lives stacked up. Bills and childcare and career ambition tend to collide in the middle years, only to give way later to more freedom, more perspective, and more time for ourselves. These golden years still exist, at least for prior generations reporting them today. What has changed, as David Blanchflower and others now show, is the left side of the curve, which today traces more of a 45-degree angle than a U. Youth are now beginning their lives in a state of ill-being, and unless we change the conditions around them, we might see the trajectory they trace flatten out entirely.

From U-Curve to Free Fall: Why Youth Can't Find Footing

The data could not be clearer: Rates of anxiety, depression, and even suicidal ideation among adolescents and young adults are rising to historic highs. Edgar Morin and Anne Brigitte Kern once wrote about "poly-crisis" as an abstract condition, but for today's youth, it is their lived reality. A childhood shaped by COVID-19's isolation has led to an adolescence haunted by climate change headlines, geopolitical conflict, and now the uncertainty of artificial intelligence reshaping the very careers they are told to prepare for. At the very moment they are meant to be building identities and establishing agency, the ground beneath them shifts with unnerving speed. Erik Erikson described adolescence as a crucible for identity formation, but what happens when that crucible is filled with nothing but volatility, offering no stable roles to step into?

A wealth of research shows that adversity during formative years leaves deeper scars than adversity later in life, and when the world itself feels like quicksand, it is no wonder that life satisfaction is slipping away. The U-curve once promised that life satisfaction would return with age, but even that promise might no longer hold. With uncertainty only deepening, the risk is not just a dip, but a decline that keeps steepening over time. This generation may not rebound—unless we intervene with purpose, preparing them not for stability that may never come, but for uncertainty that surely will.

Teaching Survival Skills for the Unpredictable Century

Uncertainty asks more of young people than we are giving them today. Our institutions are still structured for a linear world that no longer exists, beginning with degree programs that assume a straight line from school to a job that turns into a career. Yet the jobs themselves are being hollowed out, first by automation, then by outsourcing, now by AI, and even the ones that remain are often left with less meaning to give.
We do our youth a disservice by shepherding them into ever-narrower tracks—from elite sports to ambitious extracurriculars and hyper-specialized majors, as if life still rewards those who grind out their 10,000 hours in one domain. What looks like mastery too often ends up as sunk cost, and when the promised future job disappears, the years invested turn into years wasted. The antidote is to foster resilience through diversity of skills, not depth in one alone: we need more polymaths made resilient by curiosity, not specialists who are fragile by design. Lateral thinkers and those comfortable moving across domains fare better in uncertain environments. Studies in organizational psychology show that individuals with more diverse learning experiences demonstrate greater resilience, adaptability, and creative problem-solving under stress. Breadth, not just depth, is the new survival skill, and we need to get a move on with teaching it.

That means exposing young people to the messy realities of the real world much earlier than we do today. We need to, for example, set them up with internships that show them work in progress rather than perfected career paths, and offer them curricula that incentivize exploration as much as performance. The most powerful gift we can offer is optionality and the ability to open more doors, because none of us knows which ones will stay open.

Change will also require us to rethink our cultural narratives around passion and performance. "Follow your passion" has become a form of performative peer pressure, where not having one at 18 is seen as a fatal blow to any career hopes one might have. Yet today, being stuck with just one passion is equally dangerous. Flourishing under uncertainty will mean normalizing the winding path, rewarding curiosity, and valuing the skill of adaptation as highly as the skill of persistence.

Adapting to Life in a New World

The poly-crisis world isn't going away, but what can change is how we prepare the next generation to live within it. If the old model was to promise stability, the new model must be to cultivate adaptability. We cannot erase uncertainty, but we can equip them to thrive inside it. And if we do not, then the breaking will continue, much faster than we can mend it. Source of the article

Betwixt or Bewitched? Rethinking the “Middlebrow” with Dino Buzzati

Once upon a time, there was a journalist with “a weakness for the literature of the supernatural, magic, ghost stories, mysteries.” So begins a short story by 20th-century Italian journalist and writer Dino Buzzati, about a man who could very well be Buzzati himself. The character, who has “an abominable passion”—journalism—finds a booklet at a junk dealer’s containing a magic formula: If read aloud, it grants “a superhuman power.” That journalist may be a thing of fiction—he’s from Buzzati’s 1962 story “The Ubiquitous”—but the impulse behind him clearly is not. Buzzati, now considered a major figure in modern Italian literature, was for decades a reporter and special correspondent for the Italian newspaper Corriere della Sera, where he was known for his entertaining dispatches and his stubborn belief that reality was no less uncanny than fantasy. To him, journalism and storytelling were two sides of the same coin, each a way of brushing up against the extraordinary. This eclectic approach was not without its critics, both in Italy and abroad. As one of his English-language translators, Lawrence Venuti, has noted, Buzzati’s fiction was already being dismissed in 1958 for triggering nothing more than a brivido borghese in his Italian readers. The charge came from Paolo Milano, writing for the magazine L’Espresso. This “bourgeois shudder,” Milano claimed, only offered a fleeting moment of unsettling recognition, when “even the life of a right-thinking [bourgeois] is … visited by a few ghosts.” But these ghosts, he argued, are quickly and “sagely exorcised” by Buzzati, “a writer who shares” the same “sense of caution” as the middle-class consumers of his stories. His “horror museum,” after all, is carefully curated, displaying only “risk-free emotions.” But is that really the case? What if we entertained the shudder—took it seriously, on its own terms? And how are we to do so, especially today? These questions feel even more pressing in the Anglophone world, where Buzzati’s fame has long been as elusive as the meaning of his tales. The stark contrast between his relative neglect in English and his popularity across Europe—and eventually in his native Italy—was first noted by Venuti in the introduction to Restless Nights: Selected Stories of Dino Buzzati (1983), where “The Ubiquitous” first appeared. That imbalance has now begun to shift. Since 2023, NYRB Classics has launched a translation campaign, reintroducing Buzzati’s fantastical fiction to English-speaking readers. The project includes a reissue of Joseph Green’s translation of Un amore (A Love Affair, 2023); Venuti’s new translation of Buzzati’s masterpiece Il deserto dei Tartari (The Stronghold, 2023); Anne Milano Appel’s 2024 translation of the sci-fi novella Il grande ritratto (The Singularity); and the 2025 publication of an expanded corpus of fifty stories (including “The Ubiquitous”), once again selected and translated by Venuti: The Bewitched Bourgeois. This new translation project doesn’t just revive Buzzati—it reframes him. In the 1980s, when Venuti first approached Buzzati’s short fiction in English, the goal was to import a productive strangeness—what he calls a “foreignizing” flavor—into the Anglophone literary tradition. In 20th-century Italian literature, Venuti explains in his classic The Translator’s Invisibility, the genre of the fantastic had firmly taken root through writers like Massimo Bontempelli, Italo Calvino, and Tommaso Landolfi. 
But it remained marginal in the British and American canons, where realism continued to dominate. Buzzati's stories, by contrast, blend everyday settings with phantoms, spells, and surreal incidents narrated with deadpan precision. In English, they offered an avenue for unsettling the expectations of Anglophone readers and expanding the boundaries of realist writing.

Meanwhile, in 2025, as a broader range of Buzzati's short stories becomes available in English, a new kind of strangeness is being imported. The renewed target of this "foreignizing" mission seems to be middle-class or bourgeois lifestyles and tastes—along with their "shudders," as Milano might put it—and especially the cultural machinery that upholds them: the middlebrow.

The very title of the collection The Bewitched Bourgeois offers a clue. For Venuti's choice of the French borrowing bourgeois—"a very difficult word to use in English," as Raymond Williams once described it—to translate the Italian "borghese" is, itself, a foreignizing move. As it invokes the French sociological tradition, where "the petit-bourgeois relation to culture" has been defined by its "capacity to make 'middle-brow' whatever it touches," this borrowed term also gestures toward the ambivalence—and resistance—that the "middlebrow" itself displays toward translation. "The English word," as Diana Holmes writes, "with all its derogatory connotations, is … unmatched in any other language hence never quite translatable." And yet its hybridity and slipperiness, and the largely shared conditions of its rise across Europe, the United States, and beyond, make the "middlebrow" quite amenable to transnational migration.

Far from obsolete, the middlebrow is still widely consumed and hotly debated within and outside the academy. It carries a certain cultural stigma—think of Franzen snubbing Oprah's Book Club—while continuing to demand attention as a serious object of study. Though often examined within national traditions—particularly Anglo-American—its defining features are, as Beth Driscoll argues, shared and transcultural:

The middlebrow is middle class: participation requires education, income, and leisure.
The middlebrow is aspirational, respectful of culture and keen to connect with it, while at the same time commercially oriented. …
The middlebrow is emotional, enabling cultural activity that is sentimental, empathetic, and therapeutic.
The middlebrow is recreational, framed as domestic or entertaining rather than academic, and it is earnest, promoting a sense of social and moral responsibility.

The term originated in 1920s Britain, when the literary magazine Punch mocked "a new type" of BBC listener with goals of self-improvement. Virginia Woolf, too, referred to a type of consumer—the "common reader" of her 1925 essay—who reads for pleasure and "is guided by an instinct to create for himself." Over time, the label expanded to encompass not only Woolf's common reader but also the institutions and cultural products designed to serve them—works imitative of "high" culture but stripped of formal experimentation. Buzzati's own "Panic at La Scala" (1948) skewers these cultural pretensions: The story depicts the Milanese bourgeoisie at the opera, eager to consume a performance "certain of success, not excessively demanding, chosen from the traditional repertory with complete confidence." It was this cultural ambivalence at best, and promiscuity at worst—what Woolf in 1942 called "betwixt and between"—that made the middlebrow so detestable to highbrow critics.
In the United States, the term gained traction only after World War II. As scholars Cecilia Konchar Farr and Tom Perrin note, "[a]ll of the best-remembered critical discussions of the middlebrow in the US date from the mid-century," including Russell Lynes's well-known article "Highbrow, Lowbrow, Middlebrow" (1949) and Dwight Macdonald's polemical Masscult and Midcult (1960)—republished in 2011 by NYRB Classics, the same series now championing Buzzati (the evergreen paradox of the middlebrow…). This overlap feels more than accidental. Macdonald's notion of "Midcult"—a hybrid form that blends popular appeal with the trappings of "high" culture—is one Venuti explicitly connects to Buzzati in his introduction to The Bewitched Bourgeois:

His [Buzzati's] steady output of feuilletons gave him the opportunity to combine elite literary traditions with popular genres, a mix that the leftist cultural critic Dwight Macdonald would have called "Midcult." … In his stories, Poe and Kafka meet Rod Serling's television anthology series The Twilight Zone.

But how useful is it to apply an Anglo-American cultural label to an Italian writer whose literary landscape lacked a direct equivalent? If the middlebrow in English is a well-defined cultural battleground, in Italy it rather resembles one of Buzzati's phantoms: something we sense but cannot fully grasp. Italian critics preferred terms like "letteratura di consumo" or "d'intrattenimento," which sidestepped the anxieties "middlebrow" stirred in Anglophone debates. But just because the local intelligentsia didn't name such anxieties, it doesn't mean they didn't feel them.

Already in 1929, a few years before Buzzati began writing fiction, the Italian literary critic Raffaello Franchi wrote a polemical pamphlet of limited circulation that looked back on the first three decades of the century, L'europeo sedentario ("The Lazy European"). In it, he implicitly lamented the lack of a respectable middle ground between the "spectacular provincialism" of Giovanni Papini—whose Storia di Cristo (1921), published as Life of Christ (1923) in the US, became a global success—and the "snobbish modernity" of Achille Campanile, known for his surrealist puns. Far from equating this "middleness" with accessibility, however, Franchi blamed the commodified intermingling of "high" and "low" art on the very class Buzzati belonged to and would soon put center stage in his fiction:

Unfortunately, there's always some bourgeois out there ready to be bewitched [qualche borghese disposto a farsi abbagliare]—and what's more, paying for it—as is typical in a century that's become so commercial and difficult.

These anxieties only intensified during Italy's postwar economic boom. In Apocalittici e integrati (1964), Umberto Eco addressed the Italian culture wars—between the pessimists who rejected mass culture entirely and the conformists so immersed in it that they renounced any serious aesthetic mission—without ever naming the "middlebrow" as their possible solution.
Instead, he took some of Macdonald’s concerns seriously: A certain type of Midcult, Eco summarized, “pacifies its consumer by convincing them that they have experienced an encounter with culture, so that they do not entertain any further concerns.” Walter Pedullà took this even further in La letteratura del benessere (1968), condemning Buzzati’s short stories as “sedative tales.” For Pedullà, Buzzati’s short fiction exemplified the “literature of well-being” in vogue in 1960s Italy: one that, while reflecting real progress—economic growth, increased equality, cultural democratization—was ultimately “soporific” in its lack of political engagement. Associating Buzzati with the Anglo-American category of middlebrow certainly makes sense in this light. After all, he was a member of the middle class, writing about and for the middle class. His prose, mixing the “high” with the “low,” is spare and linear, much like Hemingway’s—whom Macdonald considered the epitome of Midcult and whom Venuti took as a stylistic model for The Stronghold. Even Buzzati’s experiences as a reader leaned toward entertainment over experimentation: As he once told Yves Panafieu in an interview, he respected “the authors of detective novels far more than certain authors who have won the Goncourt Prize”—because literature shouldn’t bore. Still, there’s something about Buzzati’s middlebrow that does not quite fit the Anglophone caricature. His fiction is neither comfortable nor sentimental—it is eerie. Far from sensationalizing or distorting the real world for effect—a “Midcult” trope which Macdonald called “parajournalism” (“a bastard form, … exploiting the factual authority of journalism and the atmospheric license of fiction”)—Buzzati uses the fantastic to illuminate deeper existential and societal anxieties. Take, for example, his story “Elephantiasis” (1967). Written partly as a popularizing scientific pamphlet, partly as a news piece, it addresses a familiar obsession, recounting surreal incidents that “had a vast echo in the press, radio, and TV”—the very vehicles of middlebrow culture—as plastic-made cars, infrastructures, and even children’s toys begin to grow uncontrollably in Italy, America, Japan, and Tanzania in a not too remote 2042. Buzzati’s middlebrow is not simply “betwixt,” as Woolf would call it, but bewitched. Less a diluted compromise between “high” and “low” than a threshold between reality and nightmare, his middlebrow offers a space for enchantment and estrangement. Instead of celebrating traditional bourgeois values such as earnestness or self-improvement, Buzzati’s stories provoke existential bewilderment, as his characters become trapped in looping fables. Giuseppe Gaspari, the “bewitched bourgeois” of the homonymous 1942 story that also gives the title to the collection, epitomizes this condition. Whilst vacationing with his family, Gaspari comes across a group of boys playing war games and, out of nostalgia for a bygone childhood, decides to join them. Yet he participates with such intensity that the game suddenly becomes real, and an arrow strikes him down. “This is not absurd,” Buzzati ominously told Panafieu. Buzzati’s middlebrow doesn’t aim to reassure so much as to disorient. His stories hinge on a rupture—an uncanny vision, an unexpected visitor, a strange discovery—that suddenly reshuffles the mundane. These moments jolt characters out of their routines, breaking into what he calls the “solid bourgeois world” in “The Scandal on Via Sesostri” (1965). 
It's a realm populated by "esteemed professionals" and their "irreproachable wives," who are momentarily forced to reconsider their prerogatives—to imagine, if only for a second, alternative ways of living. That story chronicles, with the precision of investigative reportage, the unmasking, after his death, of Enzo Siliri, a doctor and Nazi collaborator who had lived under the false identity of the respected doctor Tullio Larosi. In this bewitching landscape, Buzzati probes a perplexing postwar reality, in which real-life "esteemed professionals" were being exposed as complicit in the crimes of the Axis powers. And he does so much as the fictional "Dino Buzzati" would do in the story "The Ubiquitous": by mixing journalism with storytelling, reality with fantasy. It's what his translator Venuti would celebrate in the 1980s as Buzzati's "Fantastic Journalism."

Reframed as a "bewitched middlebrow," Buzzati's fiction re-enters literary history not as a comforting escape, but as a sharp tool for existential inquiry across geographical borders. "Appointment with Einstein" (1950), for instance, channels mid-century American racial anxieties through an Italian lens. In the story, the physicist has a surreal, disquieting encounter with a gas station attendant in Princeton—bearing "African features" and cast through the troubling stereotype of the "black devil"—who holds Einstein accountable for the moral implications of his theories, which ultimately contributed to the creation of the atomic bomb. In translation, the story gains an added layer of estrangement, forcing both Italian and American readers to reckon with history from a defamiliarized, foreignized perspective.

Buzzati's treatment of technology—the engine of mass culture—is where his fiction most powerfully unsettles traditional middlebrow sensibilities. In The Poetics of Genre in the Contemporary Novel, Roger Bellin compares the use of "borrowed science-fictional elements and topoi" in contemporary works like Jennifer Egan's novel A Visit from the Goon Squad and Spike Jonze's film Her to "an earlier form of upper mass-market literary borrowing—the middlebrow." Yet Buzzati's version of "techno-anxiety" is often stranger and more disruptive than this "new middlebrow," which turns "low-postmodern pastiche" into a more palatable and heterogeneous type of commodity. In "The Time Machine" (1952), a scientist creates a device that slows down time, allowing the wealthy to age at about half the normal rate. A district called Diacosia is built around the machine—but life there soon becomes dull, insular, and disconnected from the world, until the machine malfunctions and the experiment ends in grotesque disaster. Time—simply a means to an end for the bourgeois, an end that is money—is exposed as a lost cause, belatedly recognized as irretrievable. The middle-class Buzzati firmly rejected revolutionary politics. And yet, his fiction still captures the spectral logic of a system where even time is taken for granted, devalued or commodified: a "bewitched world of capital," as Giacomo Marramao might call it.

What does it mean, then, to reintroduce Buzzati's "foreignizing" fiction to English-speaking readers now? This isn't just a question about genre—realism versus the fantastic—but about how and why we read. The growing availability of Buzzati's "bewitched brows" in English brings with it an invitation to consider alternative modes of reading beyond the one that, as of 2025, still dominates Anglo-American literary criticism: close reading.
This practice "persists in published criticism" and "remains a staple in the classroom," as Rita Felski argues; it distinguishes the "tribe" of literary scholars from allegedly less alert reading communities, by a capacity "to disenchant, to mercilessly direct laser-sharp beams of critique at every imaginable object." Enchantment—and its more disturbing kin, bewitchment—have long been left off the critical agenda, dismissed as "bad magic." The stakes of this exclusion, Felski notes, seem even higher for cultural historians, who hold fast to the "sacred tenet that the popular consumer is an alert and critical consumer"—never a gullible one. That Buzzati himself has faced such criticism—blamed for "sedating" readers, as Pedullà put it—may help explain his long marginalization in literary studies.

His fiction doubtless invites us to read for pleasure, to look at the world as "an enchanting place," as the narrator tells a Martian diplomat in "Why" (1985). So does Felski, who compares enchantment to "a shuddering change of gear," the jolt of returning to reality after full immersion in art (in a phrase that recasts, with a nod of optimism and irony, Milano's earlier critique of the "bourgeois shudder"). Modern readers, Felski writes, are "bewitched but not beguiled," aware that their experience is fictional, yet still moved by it.

But in Buzzati's world, the shift between fiction and reality is rarely this clean. It is suspension of disbelief that brings Giuseppe Gaspari, the bourgeois of "bewitched" fame, to death—"the price for his arduous enchantment," the narrator comments. The twist is made even more disturbing by the story's basis in a real event Buzzati covered as a naval correspondent in 1942. Like Gaspari, Buzzati was watching kids playing in the woods when suddenly a loud rumble made them halt—and made him realize that perhaps, "beyond the last visible waters," a real war was unfolding.

Buzzati is acutely aware that narratives are not harmless. In "Stories in Tandem" (1970), he portrays an old doctor and the narrator taking turns inventing a story to pass the time, each picking up where the other leaves off—until they witness a real plane crash and begin blaming each other for the fictional conversations that foretold the catastrophe. Storytelling here is shown as eerie, prophetic, and never entirely innocent. He similarly plays with his readers' expectations—and their presumed superficiality—in "The Prohibited Word" (1956), where the narrator's friend Hieronimo refuses to utter a forbidden word, never spelled out in the story itself. "And what about my readers?" the narrator/Buzzati wonders. "Won't they notice if they read this dialogue?" "They'll simply see an empty space," answers Hieronimo. "And they'll simply think: 'How careless. They left out a word.'"

In both stories, Buzzati confronts the reader with the limits of their interpretive habits. He uses these moments to expose the consequences of our complacency—to suggest that stories, far from offering comfort, might implicate us in what we fail or refuse to notice. What do we do with the fantastic today, when so much of what once seemed unimaginable—artificial intelligence, mass surveillance, climate breakdown—is now woven into the fabric of everyday life?
If the “civil world” was “eager to exterminate” the few surviving fantasies of 60 years ago, as the narrator of “The Bogeyman” (1967) warns, what can we say of our own increasingly “uncivil” world, where the boundary between fiction and reality—and journalism itself—is constantly manipulated? Buzzati wrote about ghosts, talking machines, and surreal bureaucracies. In his world, plastic grew like cancer, and stories could kill. These tales weren’t meant to comfort the middle classes; they were meant to shake them. And today, they shake us too—like a shudder. They remind us that the middlebrow, so often dismissed as safe or sentimental, can carry a sharp edge. It can unsettle, question, and estrange. Calling Buzzati’s fiction “bewitched middlebrow” is one way of reclaiming this space: not as a bland middle ground, but as a threshold where ordinary life slips into the uncanny. Reading Buzzati now matters not just because his stories are strange—and help make the canon strange again—but because they speak to the stories we still tell ourselves. His middlebrow doesn’t offer reassurance or easy answers. It demands that we keep looking. Source of the article

Forwards, not back

Medicine aims to return bodies to the state they were in before illness. But there's a better way of thinking about health

The ancient Greek physician Hippocrates defined health as a body in balance. The human body was a system of four coordinated humours (blood, black bile, yellow bile, phlegm), each of which had its own balance of qualities (hot or cold, wet or dry) connected to the four elements (fire, air, earth, water). The balance of these qualities and humours was inextricably linked to environmental conditions. Phlegm, for instance, was connected to the element of water, and had the qualities of cold and wet. Hippocrates recognised that when this bodily system was exposed to conditions that perturbed or exacerbated its humours, such as the cold and wet of a snowstorm, the system became unbalanced, in this case producing too much phlegm. His treatments for an abundance of phlegm focused on removing the excess humour and increasing the yellow bile (hot and dry) to bring the body back into balance.

Today, we might identify such a condition as a respiratory disease. Hippocrates would not have identified the condition so locally. He would have considered it a condition of the whole body: 'dis-ease' was literally a system out of ease or balance. Today we think less about the whole body as a complex system and more about its parts and sub-systems. An X-ray of the lungs shows congestion, so we assume the problem is in the lungs and treat for pneumonia, not worrying about other parts of the body unless they show their own symptoms. If a culture in the pathology lab shows a particular microbe at work, we prescribe the relevant antibiotics to combat that microbe rather than attempt to rebalance the system as a whole (despite the current trend to add 'probiotics' to the antibiotics approach).

This move toward localised diagnoses began in the 18th century with the rise of morbid anatomy and the localisation of symptoms to parts. The French physician René Laënnec's invention of the stethoscope in 1816 allowed him to hear various abnormal sounds in the chest of a patient suffering respiratory distress. He could then carry out an autopsy after the patient died and correlate the visibly abnormal parts with the symptoms that he had detected, which led to powerful diagnostic tools. The result was the localisation of abnormalities to particular parts of the body, which led to the concept of localised diseases and the need for specialised treatments. Modern medicine celebrates the ability to diagnose problems based on localisation of symptoms. The systemic imbalance or 'dis-ease' of Hippocrates thus became a growing set of separate diseases associated with specific causes, dissociated from the notion of the body as a system.

Following the localisation of diseases came the specialisation of treatments. These treatments typically rely on identifying the problem within a particular part or subsystem and fixing it, with the goal of getting the patient back to 'normal'. If a physical part is damaged, we want to repair it so that it works again as closely as possible to the way it was working before. Or, if it cannot be fixed, perhaps because a limb was cut off or a kidney failed, then the tradition has been to replace it with something as close as possible that will work the same way: prosthetics for lost limbs, dialysis machines for failing kidneys.
Despite vast differences in understandings of disease and treatments, models of health from Hippocrates to modern medicine have focused on reestablishing the same state as occurred before illness. Health is a concept imbued with a sense of stability; it is constituted by a body returning to a state that existed before disease or injury and the maintenance of this state. Stability is therefore understood as the maintenance or regeneration of a single, ‘healthy’ state of the body. Since this approach has saved vast numbers of lives, medicine applauds the invention of new analytical tools, procedures and treatments that advance an understanding of health as a return to a previous state. The sense of stability implicit in thinking about health leads to a picture of health as an outcome of regeneration: a body damaged by injury or disease is brought back to, or regenerates, a previous, ‘healthy’ state. But what if health isn’t simply a return to a previous state? If we think about health as part of a larger framework of considering organisms as complex systems, there is no ‘return’. Complex systems shift in response to environmental challenges; they adapt to their conditions in order to survive – and adaptation breeds change. Framing health in terms of regeneration, and then asking what it means to regenerate, allows us to prod our assumptions about health as a singular, predetermined outcome and rethink our values in sustaining complex systems in light of damage. That raises the question: what does regeneration mean? In 2017, we formed the McDonnell Initiative at the Marine Biological Laboratory in Woods Hole, Massachusetts, which brought together historians, philosophers and scientists to investigate what regeneration means across multiple scales of life. Regeneration is typically thought of as renewal, revitalisation, rejuvenation, repair, recovery and a lot of other re- words. The ‘re-ness’ suggests return to prior conditions. Together, the authors of this essay explored the concept in the history of biology, asking what regeneration has meant when applied to individual organisms, especially through the 20th century. Others in the McDonnell Initiative have been looking at what regeneration means for microbial communities, germline, nervous systems and ecosystems. In our book, What Is Regeneration? (2022), we laid out ideas of regeneration arising in the 18th century and looked closely at the contributions of early 20th-century biologists. Led by René-Antoine Ferchault de Réaumur and Abraham Trembley, 18th-century naturalists made meticulous and detailed observations of organisms responding to injury. In the first decades of the 18th century, Réaumur watched as crayfish limbs slowly regenerated, documenting different stages of regrowth that led to a complete replacement of the lost limb. A few decades later, Trembley cut hydra into various pieces, and discovered that they regrew all of their lost parts. Writing in 1989, Howard and Sylvia Lenhoff convincingly argued that Trembley saw the organisms he worked with as living systems – he looked at the interacting whole, its structure, function and behaviour. Réaumur and Trembley, and others at the time, saw regeneration as a process of repair and replacement, bringing organisms back to their previous states, akin to the notion of stability we see in medicine. In 1901, the American biologist Thomas Hunt Morgan summarised what was known about regenerating organisms, from both his own studies and preceding explorations. 
Morgan experimented on a broad swath of organisms, building up evidence for his argument that scientific claims should be grounded in experimentation and devoid of theoretical speculation. Perhaps his most stunning insights came from planarian worms. After he cut them into various bits, he found that they could regenerate, producing a new tail, middle or even a fresh head, if needed. Some experiments even showed planarians regenerating a second head on the posterior of the old head after the body had been removed, a marked departure from the planarian's previous state.

What about hydra? Morgan turned them inside out, cut-and-pasted various parts together, and they kept growing and remained alive. These were most definitely not in the same condition as they were before his experiments – their bodies, through the regenerative process, had transformed into new and curious systems. Or, as Morgan put it, 'a change in one part takes place in relation to all other parts, and it is this interconnection of the parts that is one of the chief peculiarities of the organism.' Morgan joined other researchers at the Marine Biological Laboratory in marvelling at the variety of responses these organisms could make when responding to changing contexts and conditions.

One of those biologists was Jacques Loeb, who saw the potential to go further than just experimenting and describing what the organisms did. He imagined being able to engineer the results, to use the scientific knowledge about what organisms do during regeneration in order to control the results of this process and improve on what happened normally. The result would be a stable system that might in some senses be even more healthy than what was there before.

The works of Morgan and Loeb offer definitive shifts from previous views of regeneration, and they show us two important things. First, they brought to fruition Trembley's early views that regeneration is a process that occurs within a living system. A system is a group of parts that interact in a coordinated fashion. The resulting whole follows rules and principles, which allow some kind of communication and integration of the parts so that the entire system is responsive and regulated as well as stable. Stability, then, meant not a return to a previous state, but the ability of a system to maintain its coordination and communication among parts. Morgan and Loeb understood that an intervention on one part would, in some way, affect the entire organism, much in the same way that Hippocrates understood that disease affected the entire body.

Second, Morgan and Loeb show us that regeneration does not simply mean a return to 'normal' like the 18th-century naturalists believed. Rather, they saw regeneration as an adaptive process. These men were not evolutionary biologists and did not think of adaptation in evolutionary terms of species adapting to a changing environment over geological periods of time. Instead, they emphasised adaptive responses of individual systems to stimuli affecting the individuals, a more proximal sense of adaptation. They did not think in terms of replacing particular localised cells or tissues or organs, but rather in terms of stimulating the organism to respond by repairing damage for the whole. They saw the individual organism as a complex system, and one that adapts to its changing environment by initiating repair.
Skipping forward toward the 21st century, we find many life scientists embracing this notion of regeneration as a process in adaptive systems. Research on gastrointestinal microbial communities has shown that when the community is perturbed by antibiotics, it will reform, but the microbes will be different, and so will their community interactions. Experiments on lampreys indicate that, when spinal synapses regenerate following injury, the regenerated synapse is morphologically distinct from its pre-regenerative state, even if it performs much the same function. All of this research indicates that regeneration is, broadly conceived, the adaptive process by which a living system responds to a stimulus and maintains its stability. Adaptive, in this context, means that systems change through the process of regeneration in response to both internal and external conditions. Stability means that the system remains able to coordinate its parts. A system before regeneration will never be identical to a system after regeneration; there will always be changes in the parts or the relationships between them, even if they look or function the ‘same’. What appears to shape the results of regeneration is context. Whether that context is the microenvironment of a cell, a tissue, the entire body, the body within its environment or all of these, context is an essential part of regeneration within the complex system because it provides the fodder for the adaptive response. As the biologist Michael Levin and the philosopher Daniel Dennett pointed out in their Aeon essay: ‘The great progress has been mainly on drilling down to the molecular level, but the higher levels are actually not well-off … We know how to specify individual cell fates from stem cells, but we’re still far from being able to make complex organs on demand.’ Despite decades of enthusiasm for molecular biology and genetics, we can’t rely on knowledge of the molecular alone – we also need to be able to understand how molecules, cells, tissues and the body within an environment interact in order to move towards repair of individual organisms. And it is not just the higher levels of the individuals’ structure and function that shape adaptive responses to environmental changes, but also the context in which they exist. Because living systems regenerate by changing and adapting to both internal and external stimuli, understanding health as simply about stability in the sense of being fixed, as recovering a system to its condition before a stimulus, doesn’t quite make sense. The concept of regeneration embodies Heraclitus’ famous saying: ‘You cannot step in the same river twice.’ If we consider health as an outcome of regeneration, and regeneration is an adaptive process, we should lean into regeneration and adaptation when considering health. Research at the intersection of complex systems and health outcomes has shown that poor health states are closely linked with loss of adaptive responses within complex systems. We want to focus on the bigger conceptual picture and not get swept away in the details of thinking about the dynamics of complex systems. Extrapolating from lessons in regeneration biology to conceptions of health has many consequences, but we want to explore two, intertwined changes: how this adaptive view of regeneration alters our thinking about goals and outcomes of health, and what this means for our values related to health. 
If we're not tied to the position that systems must return to a previous state, then we can explore alternatives. After all, if we don't take the previous state of the system to act as our target outcome, we have to select what we want the outcome – or the range of possible acceptable outcomes – to be. If we want to intervene in a system, for instance chopping off the salamander's limb, severing the lamprey's synapses or manipulating planarian genes, our expectation cannot be fidelity to a pre-diseased or pre-injured state. Instead of focusing on replacement and repair towards reestablishing the state before illness, we can focus on what would be the ideal state for that goal. Is it necessary that the salamander regenerate a limb of the same size, or might it work just as well to have a smaller limb that takes less energy to regrow? Perhaps the new synapses that bypass the damaged site in the lamprey can be more efficient because they 'know' what path to follow? Or knocking out genes in planarians may allow other regulatory pathways to work and reveal connections among the genes, cells and resulting organism.

Now let's consider this focus on choosing outcomes within human health. Despite more than 3,000 years of fashioning prostheses to replace missing parts, today around 20 per cent of people outfitted with a limb prosthesis end up abandoning their device, often because it doesn't feel or work 'right'. There has been a great deal of research in improving functionality of prosthetic devices, less in improving comfort, and far less in understanding psychosocial factors that contribute to device abandonment. From our new perspective of health as an outcome of adaptive regeneration, we can begin to ask questions about the goals we want to achieve: do prosthetic devices need to mimic the form of the lost structure? If functionality is the primary concern of most prosthesis wearers, are there other ways to achieve functionality? And are prostheses always the best solution? While this is not the place to address the vast and richly informative literature on ableism and embracing different abilities, it is worth noting that thinking about regeneration as promoting a range of different approaches to the health of a complex adaptive system offers parallel challenges to the assumption that all organisms need to be the same to be considered whole and healthy.

This brings us to the second shift in perspective about health: values. The question of 'what constitutes an ideal state' is one of values. After all, if health is possible within an array of outcomes, we have to select what we want the outcome to be. Whatever we select will be based on what we deem most important – that is, what we value. Hippocrates realised this when he emphasised health as a balance of elements within the whole system. If asked 'rebalance for what?' he would have said 'health'. We can ask an updated version of his question concerning regeneration: regeneration for what? When intervening in a system to promote health, should we privilege enhanced function? If so, which functions do we want to optimise? Picturing health (and regeneration) through this lens of rebalancing the health of the whole has a bearing on the transhumanism movement, which asks similar questions. In both cases, we also need to recognise who is asking and answering the question: the patient? The clinician? The biomedical researcher?
Each of these stakeholders may have a different sense of what health is. Being clear about our goals is a starting point, and reflecting on the terms we use is another major step toward opening our minds to different – and perhaps better – ways to think about health: the health of all organisms, including humans. We do not have the answers, but we have offered suggestions to think in terms of complex systems that are adaptive to change and therefore themselves change over time. And we invite robust discussion about what our goals of health should be if we embrace the idea that it is not always best simply to go back to what we had before. Health is more complicated than that. Source of the article

GOATReads: Philosophy

Science + religion

The science-versus-religion opposition is a barrier to thought. Each one is a gift, rather than a threat, to the other

To riff on the opening lines of Steven Shapin's book The Scientific Revolution (1996), there is no such thing as a science-religion conflict, and this is an essay about it. It is not, however, another rebuttal of the 'conflict narrative' – there is already an abundance of good, recent writing in that vein from historians, sociologists and philosophers as well as scientists themselves. Readers still under the misapprehension that the history of science can be accurately characterised by a continuous struggle to escape from the shackles of religious oppression into a sunny secular upland of free thought (a story loudly told by a few scientists but no historians) can consult Peter Harrison's masterly book The Territories of Science and Religion (2015), or dip into Ronald Numbers's delightful edited volume Galileo Goes to Jail and Other Myths about Science and Religion (2009). Likewise, assumptions that theological and scientific methodologies and truth-claims are necessarily in philosophical or rational conflict might be challenged by Alister McGrath's book The Territories of Human Reason (2019) or Andrew Torrance and Thomas McCall's edited Knowing Creation (2018). The late-Victorian origin of the 'alternative history' of unavoidable conflict is fascinating in its own right, but also damaging in that it has multiplied through so much public and educational discourse in the 20th century in both secular and religious communities. That is the topic of a new and fascinating study by the historian James Ungureanu, Science, Religion, and the Protestant Tradition (2019). Finally, the concomitant assumption that scientists must, by logical force, adopt non-theistic worldviews is roundly rebutted by recent and global social science, such as Elaine Howard Ecklund's major survey, also published in a new book, Secularity and Science (2019).

All well and good – so the history, philosophy and sociology of science and religion are richer and more interesting than the media tales and high-school stories of opposition we were all brought up on. It seems a good time to ask the 'so what?' questions, however, especially since there has been less work in that direction. If Islamic, Jewish and Christian theologies were demonstrably central in the construction of our current scientific methodologies, for example, then what might such a reassessment imply for fruitful development of the role that science plays in our modern world? In what ways might religious communities support science, especially under the shadow of a 'post-truth' political order? What implications and resources might a rethink of science and religion offer for the anguished science-educational discussion on both sides of the Atlantic, and for the emerging international discussions on 'science-literacy'?

I want to explore here directions in which we could take those consequential questions. Three perspectives will suggest lines of new resources for thinking: the critical tools offered by the discipline of theology itself (even in an entirely secular context), a reappraisal of ancient and premodern texts, and a new way of looking at the unanswered questions and predicament of some postmodern philosophy and sociology. I'll finish by suggesting how these in turn point to new configurations of religious communities in regard to science and technology.
The humble conjunction 'and' does much more work in framing discussions of 'theology and science' than is at first apparent. It tacitly assumes that its referents belong to the same category ('red' and 'blue'), implying a limited overlap between them ('north' and 'south'), and it might already bias the discussion into oppositional mode ('liberal' and 'conservative'). Yet both science and theology resist boundaries – each has something to say about everything. Other conjunctions are possible that do much greater justice to the history and philosophy of science, and also to the cultural narratives of theology. A strong candidate is 'of', when the appropriate question now becomes: 'What is a theology of science?' and its complement, 'What is a science of theology?'

A 'theology of…' delivers a narrative of teleology, a story of purpose. A 'theology of science' will describe, within the religious narrative of one or more traditions, what the work of science is for. There have been examples of the 'theology of…' genre addressing, for example, music – see Jeremy Begbie's Theology, Music and Time (2000) – and art – see Nicholas Wolterstorff's Art in Action (1997). Note that working through a teleology of a cultural art by calling on theological resources does not imply a personal commitment to that theology – it might simply respond to a need for academic thinking about purpose. For example, Begbie explores the role that music plays in accommodating human experience to time, while Wolterstorff discovers a responsibility toward the visual aesthetics of public spaces. In both cases, we find that theology has retained a set of critical tools that address the essential human experience of purpose, value and ethics in regard to a capacity or endeavour.

Intriguingly, it appears that some of the social frustrations that science now experiences result from missing, inadequate or even damaging cultural narratives of science. Absence of a narrative that delineates what science is for leaves it open to hijacking by personal or corporate sectarian interests alone, such as the purely economic framings of much government policy. It also muddies educational waters, resulting in an over-instrumental approach to science formation. I have elsewhere attempted to tease out a longer argument for what a 'theology of science' might look like, but even a summary must begin with examples of the fresh (though ancient) sources that a late-modern theological project of this kind requires.

The cue for a first wellspring of raw material comes from the neo-Kantian Berlin philosopher Susan Neiman. In a remarkable essay, she urges that Western philosophy acknowledge, for a number of reasons, a second foundational source alongside Plato – that of the Biblical Book of Job. The ancient Semitic text offers a matchless starting point for a narratology of the human mind's relationship – and the experience of human suffering – with the material world. Long recognised as a masterpiece of ancient literature, Job has attracted and perplexed scholars in equal measures for centuries, and is still a vibrant field of study. David Clines, a leading and lifelong scholar of the text, calls Job 'the most intense book theologically and intellectually of the Old Testament'.
The book has inspired commentators across vistas of centuries and philosophies, from Basil the Great to Emmanuel Levinas, and its relevance to a theology of science is immediately apparent from the poetic 'Lord's Answer' to Job's complaints late in the book:

Where were you when I founded the earth?
Tell me, if you have insight.
Who fixed its dimensions? Surely you know! …
Have you entered the storehouses of the snow?
Or have you seen the arsenals of the hail?

The writer develops material from the core creation narrative in Hebrew wisdom poetry – as found in Psalms, Proverbs and Prophets – that speaks of creation through 'ordering', as well as bounding and setting foundations. The questing survey next sweeps over the animal kingdom, then finishes with a celebrated 'de-centralising' text that places humans at the periphery of the world, looking on in wonder and terror at the 'other' – the great beasts Behemoth and Leviathan. The text is an ancient recognition of the unpredictable aspects of the world: the whirlwind, the earthquake, the flood, unknown great beasts. In today's terms, we have in the Lord's Answer to Job a foundational framing for the primary questions of the fields we now call cosmology, geology, meteorology, astronomy, zoology… We recognise an ancient and questioning view into nature unsurpassed in its astute attention to detail and sensibility towards the tensions of humanity in confrontation with materiality. The call to a questioning relationship of the mind from this ancient and enigmatic source feeds questions of purpose in the human engagement with nature from a cultural depth that a restriction to contemporary discourse does not touch.

Drawing on historical sources is helpful in another way. The philosophy of every age contains its tacit assumptions, taken as self-evident and so not critically examined. A project on the human purpose for science that draws on theological thinking might, in this light, turn to writing from periods when this was an academically developed topic, such as the scientific renaissances of the 13th and 17th centuries. Both saw considerable scientific progress (such as, respectively, the development of geometric optics to explain the rainbow phenomenon, and the establishment of heliocentricity). Furthermore, both periods, while perfectly distinguishing 'natural philosophy' from theology, worked in an intellectual atmosphere that encouraged a fluidity of thought between them.

An instructive and insightful thinker from the first is the polymath Robert Grosseteste. Master to the Oxford Franciscans in the 1220s, and Bishop of Lincoln from 1235 to his death in 1253, Grosseteste wrote in highly mathematical ways about light, colour, sound and the heavens. He drew on the earlier Arab transmission of and commentaries on Aristotle, yet developed many topics well beyond the legacy of the ancient philosopher (he was the first, for example, to identify refraction as the phenomenon responsible for rainbows). He also brought a developed Christian philosophy to bear upon the reawakening of natural philosophy in Europe, whose programmes of astronomy, mechanics and above all optics would lead to early modern science. In his Commentary on the Posterior Analytics (Aristotle's most detailed exposition of his scientific method), Grosseteste places a sophisticated theological philosophy of science within an overarching Christian narrative of Creation, Fall and Redemption.
Employing an ancient metaphor for the effect of the Fall on the higher intellectual powers as a 'lulling to sleep', he maintains that the lower faculties, including critically the senses, are less affected by fallen human nature than the higher. So, re-illumination must start there:

Since sense perception, the weakest of all human powers, apprehending only corruptible individual things, survives, imagination stands, memory stands, and finally understanding, which is the noblest of human powers capable of apprehending the incorruptible, universal, first essences, stands!

Human re-engagement with the external world through the senses, recovering a potential knowledge of it, becomes a participation in the theological project of healing. Furthermore, the reason that this is possible is that this relationship with the created world is also the nexus at which human seeking is met by divine illumination.

The old idea that there is something incomplete, damaged or 'out of joint' in the human relationship with materiality (itself drawing on traditions such as Job), and that the human ability to engage a question-based and rational investigation of the physical world constitutes a step towards a reversal of it, represents a strand of continuity between medieval and early modern thinking. Francis Bacon's theologically motivated framing of the new 'experimental philosophy' in the 17th century takes (though not explicitly) Grosseteste's position as its starting point. As set out in his Novum Organum, the Biblical and medieval tradition that sense data are more reliable than those from reason or imagination constitutes his foundation for the 'experimental method'.

The rise of experimentation in science as we now know it is itself a counterintuitive turn, in spite of the hindsight-fuelled criticism of ancient, renaissance and medieval natural philosophers for their failure to adopt it. Yet the notion that one could learn anything general about the workings of nature by acts as specific and as artificial as those constituting an experiment was not at all evident, even after the foundation of the Royal Society. The 17th-century philosopher Margaret Cavendish was among the clearest of critics in her Observations upon Experimental Philosophy (1668):

For as much as a natural man differs from an artificial statue or picture of a man, so much differs a natural effect from an artificial…

Paradoxically perhaps, it was the theologically informed imagination of the medieval and early modern teleology of science that motivated the counterintuitive step that won against Cavendish's critique.

Much of 'postmodern' philosophical thinking and its antecedents through the 20th century appears at best to have no contact with science at all, and at worst to strike at the very root-assumptions on which natural science is built, such as the existence of a real world, and the human ability to speak representationally of it. The occasional explicit skirmishes in the 1990s 'science wars' between philosophers and scientists (such as the 'Sokal affair' and the subsequent public acrimony between the physicist Alan Sokal and the philosopher Jacques Derrida) have suggested an irreconcilable conflict.
A superficial evaluation might conclude that the charges of ‘intellectual imposture’ and ‘uncritical naivety’ levied from either side are simply the millennial manifestation of the earlier ‘two cultures’ conflict of F R Leavis and C P Snow, between the late-modern divided intellectual worlds of the sciences and the humanities. Yet in light of the long and theologically informed perspective on the story that we have sketched, the relationship of science to the major postmodern philosophical themes looks rather different.

Søren Kierkegaard and Albert Camus wrote of the ‘absurd’ – a gulf between the human quest for meaning and its absence in the world. Levinas and Jean-Paul Sartre wrote of the ‘nausea’ that arises from a human confrontation with sheer, basic existence. Derrida, building on Ferdinand de Saussure, framed the human predicament of the desire to represent the unrepresentable as différance. Hannah Arendt introduces The Human Condition (1958) with a meditation on the iconic value of human spaceflight, and concludes that the history of modernism has been a turning away from the world that has increased its inhospitality, so that we suffer from ‘world alienation’.

The first modern articulation of what these thinkers have in common – an irreconcilable aspect of the human condition in respect of the world – comes from Immanuel Kant’s Critique of Judgment (1790):

Between the realm of the natural concept, as the sensible, and the realm of the concept of freedom, as the supersensible, there is a great gulf fixed, so that it is not possible to pass from the former to the latter by means of the theoretical employment of reason.

Kant’s recognition that more than reason alone is required for human re-engagement with the world is echoed by George Steiner. Real Presences (1989), his short but plangent lament over late-modern literary disengagement with reference and meaning, looks from predicament to possible solution:

Only art can go some way towards making accessible, towards waking into some measure of communicability, the sheer inhuman otherness of matter…

Steiner’s relational language is full of religious resonance – for re-ligio is, at source, simply the re-connection of the broken. Yet once we are prepared to situate science in the same relationship to the humanities as the arts enjoy, it too fits rather snugly into a framing of ‘making accessible the sheer inhuman otherness of matter’. What else, on reflection, does science do?

Although both theology and philosophy suffer frequent accusations of irrelevance, on this point of brokenness and confusion in the relationship of humans to the world, current public debate on crucial science and technology indicates that both strands of thought are on the mark. Climate change, vaccination, artificial intelligence – these and other topics are marked in the quality of public and political discourse by anything but Enlightenment values. The philosopher Jean-Pierre Dupuy, commenting in 2010 on a Europe-wide project using narrative analysis of public debates around nanotechnology, shows that these debates draw instead on both ancient and modern ‘narratives of despair’, creating an undertow to any discussion of ‘troubled technologies’ that, if unrecognised, renders effective public consultation impossible.
The research team labelled the narratives:

1. Be careful what you wish for – the narrative of desire.
2. Pandora’s Box – the narrative of evil and hope.
3. Messing with nature – the narrative of the sacred.
4. Kept in the dark – the narrative of alienation.
5. The rich get richer and the poor get poorer – the narrative of exploitation.

These dark and alienated stories turn up again and again below the surface of public framings of science, yet they drive opinion and policy. The continuing, complex case of genetically modified organisms is another example. None of these underlying and framing stories draws on the theological resources within the history of science itself, but all illustrate the absurd, the alienation and the irreconcilable of postmodern thinking. Small wonder, perhaps, that Bruno Latour, writing in 2007 on environmentalism, revisits the narrative of Pandora’s Box, showing that the modernist hope of controlling nature through technology is dashed on the rocks of the same increasingly deep and problematic entanglement with the world that prevents our withdrawal from it. But Latour then makes a surprising move: he calls for a re-examination of the connection between mastery, technology and theology as a route out of the environmental impasse.

What forms would an answer to Latour’s call take? One is simply the strong yet gentle repeating of truth to power that a confessional voice for science, and for evidence-based thinking, can have when it rests on the deep foundations of a theology that understands science as a gift rather than a threat. One reason that Katharine Hayhoe, the Texan climate scientist, is such a powerful advocate in the United States for taking climate change seriously is that she can work explicitly through a theological argument for environmental care with those who resonate with it, but whose ideological commitments are impervious to secular voices.

There are also grassroots-level examples of how religious communities can support a healthy lay engagement with science. Local movements can dissolve some of the alienation and fear that characterise science for many people. In 2010, a group of local churches in Leeds in the UK decided to hold a community science festival that encouraged people to share their own and their families’ stories, together with the objects that went with them (from an ancient telescope to a circuit board from an early colour TV set constructed by a resident’s grandfather). A diverse movement under the general title ‘Equipping Christian Leadership in an Age of Science’ in the UK has discovered a natural empathy for science as a creative gift, rather than a threat to belief, within local churches. At a national level, the past five years have seen a remarkable project engaging senior church leaders in the UK with current scientific issues and their researchers. In a country with an established Church, it is essential that its voices in the national political process are scientifically informed and connected. Workshop participants, including scientists with no religious background or practice, have found the combination of science, theology and community leadership to be uniquely powerful in resourcing discussions of ethical ways forward, on issues from fracking to artificial intelligence.
A relational narrative for science – one that speaks to the need to reconcile the human with the material, and that draws on ancient wisdom – contributes to the construction of new pathways towards a healthier public discourse, and towards an interdisciplinary educational project that is faithful to the story of human engagement with the apparently chaotic, inhuman materiality of nature, a nature whose future must nonetheless be negotiated alongside our own. Without new thinking on ‘science and religion’, we risk forfeiting an essential source of wisdom today.

In the glow of the candle

In a dark library lit by a single lamp, four men, a young woman and three children crowd around a circular dais. They are staring at a clockwork contraption called an orrery, housed within giant bands of metal that suggest a celestial sphere. Below, tiny planets rotate around the Sun, orbited by pearl moons. Concentric plates allow the planets to move according to their relative speeds. The lecturer in his striking red gown is pointing to Jupiter’s moons, while a younger man in a purple coat and gold-striped waistcoat assiduously takes notes. His notetaking implies the event isn’t run-of-the-mill, but something special and worth recording.

A small lamp has taken the place of the central sun in the orrery and throws light upon everyone’s faces. We can see it only as a reflection below the elbow of the silhouetted youth in the foreground – a wick burning in a jar of oil. The lamplight adds an eager gleam to the eyes of the inquisitive young children and illuminates the contemplative gaze of the young man on the right. It highlights the edges of the young woman’s frilled bonnet and the cheekbones of the adolescent who leans over the edge of the orrery in front of us. It is his shoulder that we can look over without feeling as if we are intruding. The lamp illuminates all our faces as our minds are enlightened by the science we observe in action.

Today, A Philosopher Giving that Lecture on the Orrery, in which a Lamp is Put in the Place of the Sun (1766) by Joseph Wright of Derby (1734-97) is rightly considered a masterpiece. When it was first exhibited at the annual exhibition of the Society of Artists in Spring Gardens, London, in 1766, reviewers singled it out for particular praise, calling it ‘exceeding fine’. It attracted more attention than any other work on display, inspiring one reviewer to break out in rhyming couplets: ‘Without a rival let this “Wright” be known,/For this amazing province is his own.’

Wright’s Orrery was a huge statement from a young and ambitious artist. Why, then, was he overlooked when the Royal Academy of Arts was founded by George III just two years later, when Wright had been lauded as a ‘genius’ at that same year’s Society of Artists exhibition? Why was he not a founder member of the new august institution?

The Royal Academy was founded in December 1768 by George III at the behest of a small number of artists and architects. Membership was limited to 40, meaning more than 160 members of the Society of Artists did not make the cut. The Society had formed only eight years earlier, in 1760, as a way for Britain’s leading artists to meet, converse, study and exhibit together in London. It’s hard to imagine now, but there were no regular public exhibitions of art before this time. Art was shown to discerning patrons in artists’ studios and viewed in private collections in aristocratic homes. The Society changed this and, for a shilling (around £5 today), anyone could scrutinise the finest paintings of the year. At its peak, it had more than 200 members, including the country’s leading landscape painter Richard Wilson, the ‘grand manner’ portraitist Joshua Reynolds, and the architect William Chambers, who would later design the Great Room of the Royal Academy. It also included younger artists such as the American painter Benjamin West and the history painter Nathaniel Dance, as well as Wright himself and his friends John Hamilton Mortimer and Ozias Humphry. These young men were finding their feet by exhibiting in this new annual mixed exhibition.
Wright was 30 when he was elected a member of the Society in May 1765, the year his first ‘candlelight’ painting, Three Persons Viewing the Gladiator by Candlelight (1765), was exhibited to favourable reviews. While he was known as a portraitist in his home town of Derby, where he lived and worked, it was his ambitious candlelight paintings that significantly raised his profile in the capital.

Wright’s friends in London were young and opinionated artists, members of the Howdalian Society, but his circle in Derby was older and more scientific. John Whitehurst, a horologist and geologist, lived at 22 Iron Gate, a few doors away from Wright at number 28, while Peter Perez Burdett, a cartographer and son of an instrument maker, lived in Full Street. Burdett was so familiar with Wright that he often borrowed money from him (and never seemed to pay it back). Many of Whitehurst’s friends – including the doctor Erasmus Darwin (grandfather of Charles Darwin) and the potter Josiah Wedgwood – were associated with the Lunar Society, a group of industrialists and natural philosophers who met regularly in Birmingham.

The Lunar Society gathered on the Monday closest to the full moon. Their choice of date is often explained as a matter of travel, the full moon offering the best light for journeys home after lengthy meetings. But these inquisitive men (and they were all men) chose the name of their society with care. Most were members of the Royal Society, the prestigious scientific organisation founded in London in 1660, and were fascinated by astronomy, engineering and the latest developments in chemistry and physics. The Moon was linked to Earth’s tides, and the race was on to solve the problem of measuring longitude at sea (not least because there was a £20,000 prize on offer for the first person to do this accurately). Meanwhile, the orbits of Jupiter’s moons were already used to measure longitude on land. Wright would allude to his allegiance to the Lunar Society from 1768 onwards by painting a full moon visible through windows and open doors in many of his paintings, including An Experiment on a Bird in the Air Pump (1768), The Blacksmith’s Shop (1771) and The Alchymist (1771).

Wright drew upon the Lunar Society’s investigations for his candlelight paintings of scientific lectures and experiments. The Orrery is the most masterful of these (matched only by An Experiment on a Bird in the Air Pump). It is a resolutely modern painting, more than 2 metres wide, on a scale that competed with classical history paintings; one that drew upon the current fascination with scientific learning, and predated West’s revolutionary modern history paintings. Although it was rooted in Wright’s portrait practice, the Orrery was much more besides, because it was simultaneously a manifestation of the sublime in nature – of the insignificance of Man when contemplating the Universe.

Lighting his painting from within also showed a deft understanding of art-historical antecedents. Caravaggio’s theatrical compositions often highlighted faces with a shaft of light, leaving other figures in darkness. His international followers were known as the Caravaggisti, and it was the work of the 17th-century Dutch artist Gerard van Honthorst that inspired Wright to use a centralised light source such as a lamp or candle. Nothing quite like the Orrery had been seen before in England.
Wright’s training in London and subsequent practice as a portrait painter in Derby ensured that the faces of those clustered around the planetary machine were credible and nuanced, and he used friends and collectors as models. The painting quickly sold to Washington Shirley, 5th Earl Ferrers, for the impressive sum of £210 (£22,000 today) – up to eight times the amount Wright received for his portraits, a price that reflected the painting’s ambitious narrative content and large scale.

Wright’s friend Burdett appears in the Orrery, his tricorne hat under his arm as he ardently takes notes. His own interests were, in fact, not astronomical but terrestrial – he won a £100 prize for his accurate survey of Derbyshire. The lecturer discussing the planets was possibly based on Whitehurst, who was researching the formation of Earth at the time, while Ferrers, Burdett’s patron and the purchaser of the Orrery, stands on the far right, admiring his protégé’s diligent notetaking. Ferrers may, in fact, have commissioned the painting – his nephew, Lawrence Rowland Ferrers, is the young boy contemplating Saturn.

Ferrers owned an orrery, but Wright may have first seen one in action at public lectures given in Derby by James Ferguson (a friend of Whitehurst’s) in 1762. The orrery was designed as an astronomical teaching tool for naval academies, with a brass ball representing the Sun at its centre; in some models, the ball could be replaced by a lamp so that the fundamentals of eclipses could be demonstrated. Ferrers, a former naval officer, had been elected to the Royal Society in 1761 following his observations of the transit of Venus, and the ability of Wright’s orrery to show eclipses (of great interest to Ferrers) suggests that Wright chose both the equipment and his sitters with great care.

We can see that Wright was well connected in London and Derby, and reviews show that he was well respected, both for his portraiture and for his new candlelight paintings. So why was he overlooked by the founders of the Royal Academy? Was it his depiction of science that didn’t chime with their aspirations, or were there other factors at play?

Many of the artists who were founder members of the Royal Academy had formerly been directors of the Society of Artists, including West, Dance and Wilson. They had resigned en masse after a group of young members, including Wright’s friend Mortimer, demanded reform. These young members were sick of the monopoly the old guard held on the director posts, and voted to change the election procedure, wanting those who ran the Society to be the artists deemed best on merit, not on seniority or on the number of artists they could cajole or even bribe into voting for them.

So, from its very inception, the Royal Academy was a partisan place, founded in retaliation against changes forced through at the Society. The directors who resigned were appointed to executive posts at the new, rival Academy, and a line in the sand was drawn: to be an Academician, you had to withdraw from all other British art societies. There were grand appointments of overseas artists, including the Swiss painter Angelica Kauffman (one of only two women on the initial roster), and Reynolds was persuaded to become the first president. But the founder member places were largely taken up by second-rate painters and jobbing artists – Reynolds’s loyal drapery painter Peter Toms, for example. Samuel Wale, another member, was described by a contemporary as ‘not one of the first artists of the age’.
By contrast, none of the agitators for change at the Society of Artists was offered membership, not even Wright. Wright had never stood against the old guard, but his outspoken friends in the Howdalian Society ensured that the entire group and its associates were barred from joining the Academy.

If the reason for Wright’s exclusion was largely political, there may have been other factors at play too. The Academy was fashioned on academies on the Continent, in Florence, Milan, Paris and Rome. These followed a classical model, championing ancient history paintings and religious and mythological scenes over portraits, landscapes and still lifes; the latter were deemed lower in the pecking order when it came to being assigned the best spots in annual exhibitions. There was no provision at all for artists who painted contemporary scenes that aspired to rise above genre painting and domestic conversation pieces. West cannily waited until he became an Academician before he switched from ancient to contemporary historic scenes and, even then, he was seen as a radical. Wright, by comparison, was painting fashionable science lectures that grappled with giant subjects such as our place in the Universe, and with life and death from a scientific perspective. There was no pigeonhole for him; no clear place in the hierarchy. His focus on science might well have seemed threatening to traditional religious painting, where only God could play dice.

Wright had made mechanical contraptions as a boy and, in later years, created his own version of the camera obscura to study light and shadow. He painted solar systems and chemical reactions, and experimented with unusual light sources. Two fellow artists who similarly pursued scientific routes were also excluded from the Academy: the equestrian painter George Stubbs and the botanical collagist Mary Delany. Stubbs had spent 18 months holed up in a barn in Lincolnshire in the mid-1750s, dissecting an embalmed horse to better understand its anatomical structure. (His painting of the racehorse Whistlejacket is now one of the most iconic works in the National Gallery in London.) Delany painstakingly reproduced the botanical likenesses of nearly a thousand flowers, creating intricate collages that the naturalist Joseph Banks claimed were the only ones he would trust to ‘describe botanically any plant without the least fear of committing an error’. In contrast to the classical idealisation of nature advocated by the Academy, Wright, Stubbs and Delany studied nature for themselves, using the latest scientific discoveries to further their work.

Wright’s choice of subject matter was not only contemporary but bordered on the heretical. In his candlelight paintings of the orrery, the air pump and the alchemist at work, he not only employed dramatic lighting and plunging shadows to heighten the drama; the scenes themselves dealt in mortality and the insignificance of man in relation to the natural world, and suggested that the scientist was now usurping the divine creator. In the Orrery, a mixed assembly of people, representing the various ages of man, contemplates the solar system from above. Earth is a tiny sphere, barely visible on the right side of the clockwork machine, orbited by one desultory moon. Considered next to the larger spheres of Jupiter and Saturn, with their generous dusting of orbiting bodies, it looks small and insignificant. Man is inconsequential from this perspective, invisible from space.
Immanuel Kant, in his Universal Natural History and Theory of the Heavens (1755), compared the ‘infinite multitude of worlds and systems which fill the extension of the Milky Way’ to ‘the Earth, as a grain of sand, [that] is scarcely perceived’. The Anglo-Irish philosopher Edmund Burke concluded that the starry heaven ‘never fails to excite an idea of grandeur’. Burke explored the grandeur of the Universe in his treatise A Philosophical Enquiry into the Origin of Our Ideas of the Sublime and Beautiful (1757) and, long before Caspar David Friedrich and J M W Turner popularised the sublime in art, Wright explored its visual power. The sublime is the sensation we feel when looking at a thundering waterfall or a sheer cliff face, or when we contemplate the Universe – a fear, an awe so overwhelming that it affects us in a bodily way. For Burke, not only could natural phenomena on Earth be sublime, but so could the properties of light and dark. He wrote:

Mere light is too common a thing to make a strong impression on the mind, and without a strong impression nothing can be sublime. But such a light as that of the sun, immediately exerted on the eye, as it overpowers the sense, is a very great idea.

Burke’s theory may have informed viewers who contemplated the rotating brass orreries, which taught navigation by day and the sublime contemplation of Man’s insignificance by night. Wright intensifies the viewing experience in his painting, using a small central lamp to transform the library into a sea of shadows, a dark nothingness in which the brass wheels turn and planets rotate. The scale of the painting and the silhouetted figure over whose shoulder we can peer both contribute to the sensation that we are experiencing the phenomenon for ourselves.

In the Orrery, Wright paints a manmade model of the solar system. Increasingly, in his candlelight paintings, he shows the mastery of mankind over nature. In the Air Pump, the lecturer places a live cockatoo – an exotic pet – in a glass sphere, and removes the air to show that nothing can live in a vacuum. Travelling lecturers in fact used something called a ‘lungs-glass’ that replicated this without the need for animal sacrifice: that would be too cruel, ‘too shocking’, acknowledged Ferguson. Wright goes for the dramatic approach, forcing the young girls to hug each other for support as the older one turns away in horror. This isn’t the natural world, but man’s domination of it – nature is imprisoned in the glass and banished beyond the windowpane, where a young boy fiddles with the blind, about to shut out the natural light source of the Moon (full, of course) to maximise the drama. The cockatoo resembles the white dove of the Holy Spirit, and the lecturer, by extension, appears to play God.

In 1773, Wright journeyed to Rome, where he stayed for two years. He bathed in its warm, diffused light, and sketched classical ruins akin to those that had formed the setting for The Blacksmith’s Shop. He never returned to his candlelight scenes, or to scientific or industrial subjects, preferring to paint figures in Italianate landscapes and portraits in his studios in Bath and Derby. Wright’s early patron, Ferrers, died in 1778, and the 6th earl (Ferrers’s younger brother) did not care for the Orrery, despite his own son Lawrence Rowland featuring in it. The Orrery was offered for sale at Christie and Ansell but didn’t attract enough interest to sell, and was bought in by the auction house for £84, a fraction of the original sale price.
Subsequent sales saw it change hands for just £50 before it was bought by public subscription in 1884 for the newly opened Derby Museum and Art Gallery. This was to be the turning point in the fortunes of the Orrery. Today it is greatly admired (alongside the Air Pump) and is Wright’s most celebrated painting. While the paintings of the founder Academicians Toms and Wale have been lost to history, Wright has a suite of galleries dedicated to his work at the Derby Museum and Art Gallery, and is regularly the subject of major exhibitions. So, while the Orrery may not have led to early success at the Royal Academy, it ensured that Wright is now seen as one of the 18th century’s most assured and celebrated British artists.

GOATReads: Sociology

The Art We Do Together: “Art Worlds” 40th Anniversary

One way to think about intellectual life is as a musical composition where each new book adds to the chorus, bringing in the rhythms, tonalities, and hooks that give shape to the overall melody. Every now and then, however, a book comes along that changes the tune altogether. Howard Becker’s Art Worlds, which now celebrates its 40th birthday, is one of those books.

It is hardly an overstatement to say that the publication of Art Worlds in 1982 changed forever how sociologists study art. It created a seismic shift: it demonstrated that the sociological study of art need not be engulfed in trying to solve highfalutin aesthetic questions (e.g., What is art? How do we distinguish it from non-art? What is an author?) and could instead focus on studying the collective practices through which artworks are realized.

Art Worlds offered a sharp contrast to the scholarship that had dominated the study of art up to that point. The two decades before its publication had been characterized by an all-out assault on the central ideas of modern aesthetics. In France, for example, the idea of the author had been demolished by poststructuralist authors like Roland Barthes and Michel Foucault (there’s an irony somewhere there). Meanwhile, in Italy, Umberto Eco celebrated the iconoclastic emergence of popular culture and how it upended the old hierarchies that thinkers of an earlier generation, most notably Theodor Adorno, had sought to defend. In England, Stuart Hall and Raymond Williams, along with the rest of the Birmingham School, laid bare the inner workings of cultural hegemony, while John Berger introduced British public television’s mass audiences to a new way of seeing art as an apparatus of symbolic domination. Even the heavily fortified citadels of art history and aesthetics were not immune; schools of thought such as the institutional theory of art, feminist art history, and Marxist aesthetics mounted internal rebellions against long-held ideals about the purity and universality of art. Meanwhile, art itself was going through similar convulsions, with movements like pop art, land art, performance art, Fluxus, feminist art, and institutional critique, not to mention the myriad art collectives in Latin America and Eastern Europe, defying the modern canon and the institutions that had defined art up to that point.

Sociology, Becker’s intellectual home, was “all in” on the mutiny against modern aesthetics. The 1960s and ’70s were a time in which trenchant critique, bordering on philistinism, dominated the sociology of art. Text after text adopted what we might call a Scooby-Doo research model: taking a seemingly good character (art) and proceeding to unmask it as the bad guy (ideology). Thus, while authors such as Arnold Hauser and Pierre Francastel sought to expose seemingly inert, formal elements of artworks as projections propelled by “real social forces,” others like Raymonde Moulin revealed that art was not a pure and autonomous field of activity but one guided by market forces. Still others, like Pierre Bourdieu, revealed that love for art was little more than a bourgeois conceit for social reproduction.

In a time in which doing a sociology of art seemed to require deploying sociology against art, the genius of Art Worlds was radically simple: it just studied art as something that people do together. In so doing, Becker took a potentially controversial idea—that art is a form of collective action—and presented it in a disarmingly common-sense way.
If we study how art is produced, Becker argued, we soon realize that this process is rarely, if ever, an individual one. Artists always depend on others to obtain the materials to produce their works, as well as to exhibit, play, publish, and distribute them. Art, it follows, is a process that requires collaboration and coordination among different people. In this sense, it is no different from any other social activity, which means that we need to study it as we do any other type of social process: by focusing on what people do. This meant studying not only artists but also critics, curators, editors, suppliers of art materials, administrators, and audiences, to name just a few, along with the standards, conventions, and technologies that allow them to coordinate their actions and produce an artwork.

Becker thereby opened an entirely different empirical research program in the sociological study of art, one that moved attention from the ontological and epistemological questions that had dominated traditional aesthetics, such as “What is art?” and “How do we know and experience art?”, to the pragmatic question “How is art done?” Facing a painting like, say, Picasso’s Guernica, Becker invited us not simply to focus on decoding its symbolism and formal composition, or on trying to decipher Picasso’s artistic intent and reveal its underlying meaning, but to ask ourselves how such an artwork could be done. This approach deployed a new arsenal of empirical queries: What were the networks of collaboration and cooperation that helped Picasso paint this work? What materials did he use, where did he get them, and who provided them? What conventions did he follow (or break)? What institutions supported him? In short, what kind of “world” and collective effort had to be in place so that Picasso could create and display this masterpiece?

As I was preparing this short essay, I was curious to see what contemporary readers make of this now 40-year-old book. So I indulged in that most peculiar ritual of our age: reading online reviews. The overwhelming majority were positive, effusively praising the clarity and richness of Becker’s descriptions. The negative ones were almost unanimous in their criticism: Art Worlds lacks any “real” theory and is filled with trivially obvious observations about how the art world works. Such criticisms did not surprise me, as they mirror those I have heard leveled in graduate seminars over the years. Students in search of a “theory fix” typically fault Art Worlds for being a perfect example of why sociology has a bad press as a “science of the obvious.”

These critiques forget that the obvious is often what is most easily missed—and dismissed. If Art Worlds was and remains important, it is precisely because it reminds us of the obvious: that art is a collective practice. Somehow, this seemingly platitudinous observation had been missing from most art analysis, reducing art history to a narrative about individuals and their heroic feats of creativity. By inviting us to remember that art is always a form of collective action, Art Worlds widened our attention to include all those agents, practices, and technologies that had typically remained invisible and barely made it into the hegemonic narratives of art, but without which art would be simply impossible. In so doing, Art Worlds reminded those studying art to be humble in their descriptions and to pay attention to the perfectly banal yet crucial facts that compose the social worlds we inhabit.
Photography needs film, digital files, and cameras; consequently, you cannot really understand the transformation of photography without understanding the corporations that produce the photographic materials that shape what artists can and cannot do. Sometimes the absence of an artist’s work in a museum is not due to ideological or aesthetic reasons, but simply because the artworks were “too big to go through the museum’s doors and too heavy for its floors to support.”

But unlike the mad king in Borges’s story “On Exactitude in Science,” who sought to create a map that was a perfect representation of his empire, Becker offered in Art Worlds an incomplete map. This inconclusiveness was not a bug but a carefully designed feature of the sociological tradition he had inherited from his mentor, Everett Hughes. That tradition was firmly anchored in the belief that any attempt at providing a definitive account, let alone a conclusive theoretical model, of any social world is fated to fail because these worlds are continually changing. This is why Art Worlds, to the desperation of some, does not offer a theoretical model. It is also why Art Worlds makes no pretension to having provided a final account of what art worlds are or how they work. Instead, the book is constructed around carefully curated, open-ended, and inconclusive lists. Paragraph after paragraph, we are told that “sometimes” artists do this, while “other times” they do that, and yet “other times” they do something else. The result is a book that reads not as a closed treatise or model but as a compendium of researchable empirical questions that invite the reader to continue exploring them. If there is something that defines Art Worlds, it is this dogmatic antidogmatism—a complete refusal to have the last word.

This antidogmatism and open-endedness are precisely what make Art Worlds a fresh and necessary read even today. Unlike classics now sunk by the weight of their theoretical models, Art Worlds still reads as an object lesson for anyone writing in academese for a living. At a moment in which oppositional and antagonistic writing seems to dominate the conversation, Art Worlds never tries to convince, demonstrate, or conclude, only to invite us into a conversation. The book does not require anything of the reader, such as prior knowledge of the controversies of some subfield or fluency in particular concepts, theories, or debates. Art Worlds offers a leveled playing field on which the author never imposes himself upon the reader, because there is no battle to be won, just a conversation to be had. This is, for me, the indelible value of this book: a perennial reminder that writing can take the form of an open-ended invitation to think together.