
GOATReads: Sociology

Is inherited wealth bad?

Despite associations with the idle rich, the fact that inheritances are rising is a sign of a healthy, growing economy Scan the headlines and you might think that Western economies are on the verge of an ‘inheritance explosion’. Popular narratives warn of a looming ‘great wealth transfer’ as baby boomers pass down trillions, and some commentators fret about a new era of ‘inheritocracy’ in which an idle rich class dominates by virtue of birthright. The storyline is alarming: booming bequests funnel ever more unearned riches to heirs, widening inequality and sapping economic dynamism. This notion fits a broader anxiety of our age, that capitalism is hardening into a hereditary hierarchy, undermining the meritocratic ideal. Yes, inheritance values are rising across the Western world. But this does not pose an existential threat to the economy, nor is it necessarily a drag on growth. Far from being a feudal relic that cements a permanent aristocracy, inherited wealth has changed in character and scale over time. Its relationship with growth and inequality is more complex than many assume. For most people, inherited family wealth consists of a parents’ home or long-term savings. Occasionally, it is a family enterprise, and when such firms survive not just a few years but several decades, they reflect a form of entrepreneurship that thinks in generations rather than quarters. Recent research on inequality also suggests that inheritance can, in fact, reduce wealth gaps, as bequests tend to matter far more for less wealthy heirs. Taxing inheritances may seem like a neat solution to curb inequality, but in practice inheritance taxes have often proven inefficient and inequitable. As a result, many countries that once relied on them have quietly abandoned these taxes in favour of more effective capital income taxes targeting profits, dividends and realised gains, rather than wealth stocks and bequests. In this essay, we take a reflective journey through the latest evidence and historical trends on inherited wealth. We explore how the role of inheritance has evolved over the past century, how bequests affect the distribution of wealth within and across generations, and whether inheritance taxation has a constructive role to play in a modern tax system. A nuanced picture emerges. While inheritance is not without challenges, it often fosters long-term investment and continuity, and attempts to heavily tax or curtail it have frequently backfired. Perhaps instead of fixating on what is passed down, we should focus on expanding who gets to build and eventually inherit wealth, through policies that spur growth, entrepreneurship and broad-based opportunity. The dominant description of wealth in the late neoliberal era is one of burgeoning dynastic capitalism. In this view, the postwar period of relative equality has given way to a resurgence of inherited wealth. Economists have documented a rise in aggregate inheritance flows, that is, the total value of bequests and gifts each year, relative to national income. In countries such as France and the United Kingdom, inheritance flows that were modest in the mid-20th century have climbed back toward levels last seen in the early 1900s. My own historical data for Sweden show a similar pattern. To critics, this trend signals a return to an era when economic rank was literally inherited, an age of rentier elites and ossified social mobility. 
Thomas Piketty, who has generated many of these long-run data series, warns of a revival of ‘patrimonial capitalism’, a society in which inherited fortunes overshadow self-made wealth. The term ‘inheritocracy’, recently highlighted in The Economist, neatly captures the fear of a society governed by heirs rather than merit. Several macroeconomic forces lie behind these trends. Western societies have grown both older and richer. Longer lifespans and higher accumulated wealth mean that older generations are bequeathing larger sums than their parents did. At the same time, wealth-to-income ratios in advanced economies have risen substantially alongside slower income growth. When total wealth swells, through rising stock markets, housing values and pension assets, even a constant propensity to leave bequests translates into larger inheritances relative to GDP. In this sense, part of the perceived ‘inheritance boom’ is simply a byproduct of prosperity. This nuance, however, is often lost in alarming headlines declaring that trillions will soon be ‘passed on’ to heirs. Critics argue that rising inherited wealth undermines both fairness and efficiency. The fairness concern is straightforward: large inheritances confer advantages on people who did nothing to earn them, widening the gap between those born into affluence and those born to modest means. The efficiency concern is that capital in the hands of heirs, potentially less talented or motivated than the original wealth creators, could slow productivity. These worries are not new. A century ago, thinkers from Andrew Carnegie to European social democrats warned against concentrated dynastic wealth. Today, similar anxieties have returned, fuelling the belief that unchecked inheritance will entrench a new aristocracy and sap economic dynamism. Before going further, it is worth clarifying what economists typically mean by ‘inheritance’, and what they do not. In the standard literature, inheritances are defined as the total net-of-tax value of all material transfers received at death, including bequests and life-insurance payouts. If a decedent has a positive net worth, the estate is distributed to heirs according to succession rules that usually reflect legal and genetic relationships. What is included depends on national law, but in most cases the focus is on tangible and financial assets: housing, land, businesses, stocks, bonds, and cash. This definition deliberately excludes other powerful forms of intergenerational transmission. Trust arrangements that bypass estates, certain foundations, or wealth shifted well before death may fall outside the taxable inheritance base in some countries. More broadly, families transmit advantages that are never counted as inherited wealth at all: education, social norms, personal connections, reputational capital, and ultimately genetic endowments. These forms of transmission can generate substantial economic value for recipients, but they are conceptually distinct from inherited wealth as measured in economic statistics and taxed in fiscal systems. The analysis that follows is therefore concerned with a specific and narrow phenomenon, material wealth transferred at death, not with the full universe of intergenerational advantage. Turning to what theory and the actual evidence show, a more nuanced picture emerges. While inheritance flows have increased as a share of national income since the mid-20th-century low point, they are not the harbinger of a new Gilded Age that many suppose. 
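The macroeconomic mechanism described above, and the stylised framework behind the cross-country estimates in Figure 2 further below, can be summarised in a simple accounting identity used by Piketty and others: the annual inheritance flow as a share of national income is roughly the mortality rate, times the ratio of the average wealth of the deceased to that of the living, times the economy’s wealth-to-income ratio. The sketch below illustrates that identity with purely invented inputs, not the estimates used in this essay, to show how a richer, slower-growing economy mechanically produces larger bequest flows than a fast-growing, lower-wealth one.

```python
# A minimal sketch of the stylised inheritance-flow accounting described in
# the text: flow (share of national income) = mortality rate
#   x relative wealth of the deceased x wealth-to-income ratio.
# All numbers below are invented for illustration, not estimates from the essay.

def inheritance_flow_ratio(mortality_rate: float,
                           relative_wealth_of_deceased: float,
                           wealth_income_ratio: float) -> float:
    """Annual flow of bequests and gifts as a share of national income."""
    return mortality_rate * relative_wealth_of_deceased * wealth_income_ratio

# A mature, slow-growing economy: large wealth stock, relatively old decedents.
mature = inheritance_flow_ratio(mortality_rate=0.012,
                                relative_wealth_of_deceased=1.6,
                                wealth_income_ratio=6.0)

# A fast-growing, lower-wealth economy: smaller wealth stock, lower mortality.
emerging = inheritance_flow_ratio(mortality_rate=0.007,
                                  relative_wealth_of_deceased=1.6,
                                  wealth_income_ratio=4.0)

print(f"mature economy:   {mature:.1%} of national income")    # about 11.5%
print(f"emerging economy: {emerging:.1%} of national income")  # about 4.5%
```

On these made-up inputs, the mature economy’s inheritance flow is more than twice the emerging economy’s even though nobody’s propensity to bequeath has changed; the difference comes entirely from the size of the wealth stock and the mortality rate.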
Figure 1 below illustrates this complexity using long-run data for France, Sweden and the UK. Around 1900, inheritance flows amounted to roughly 15-25 per cent of national income, compared with 10-15 per cent today. Meanwhile, inheritance flows relative to total private wealth have declined steadily over the past century, with no recent reversal. Most strikingly, while roughly 80 per cent of private wealth in early 20th-century Europe was inherited, that share has fallen to between 40 and 60 per cent, whereas the US has maintained a similarly lower level throughout the past century. In other words, a far greater share of wealth is now accumulated over individual lifetimes through work, saving, and entrepreneurship. This evidence tells a dual story. Inheritance remains economically significant in advanced economies: about 10 per cent of national income is transferred annually from the dead to the living, and roughly half of private wealth has inherited origins. Family legacies still matter. Yet, compared with a century ago, modern economies are markedly more dynamic. Much more wealth today is self-made within a generation, reflecting new savings, business formation, and the democratisation of asset ownership. This shift from old capital to new capital has been more pronounced in some countries than others, but the general direction is clear.

Figure 1: The historical evolution of inheritance in Western economies. From ‘Inherited Wealth over the Path of Development: Sweden, 1810-2016’ (2020) by Henry Ohlsson, Jesper Roine and Daniel Waldenström, Journal of the European Economic Association and, for the US, ‘On the Share of Inheritance in Aggregate Wealth: Europe and the USA, 1900-2010’ (2017) by Facundo Alvaredo, Bertrand Garbinti and Thomas Piketty, Economica

Going beyond Western countries, Figure 2 below presents newly estimated ratios of annual inheritance flows to national income for Japan, China and India, alongside selected Western economies. These Asian estimates are constructed using the same methodology and publicly available macroeconomic data. In fast-growing, lower-wealth economies such as China and India, inheritance flows amount to a relatively modest 6-7 per cent of national income. By contrast, in richer but more slowly growing economies like Japan, Germany and France, inheritance flows are nearly twice as large, at around 11-12 per cent. This cross-country comparison reinforces the broader mechanism underlying inherited wealth. Sustained capital accumulation raises wealth stocks and, over time, the volume of bequests. In China and India, mortality rates are comparatively low while income growth is rapid, limiting inheritance flows relative to income. In mature economies, higher accumulated wealth and slower income growth generate larger inheritances. The UK stands somewhat apart, with a lower inheritance ratio reflecting both subdued wealth levels and weak income growth. Overall, what appears as an ‘inheritance boom’ is largely a reflection of where countries sit along the development path.

Figure 2: Inheritance flows around the world (per cent of national income). Note: inheritance-income ratios are calculated using data on aggregate wealth, population mortality and an assumed average wealth ratio of the deceased to the living, following the stylised model framework presented in ‘On the Long-Run Evolution of Inheritance: France, 1820-2050’ (2011) by Thomas Piketty, Quarterly Journal of Economics

What is the actual impact of inheritance on economic inequality?
Does it entrench divides, or could it even help narrow them? At this point, some concrete magnitudes help anchor the discussion. What counts as ‘wealthy’ varies widely across countries. In the United States, households in the top 10 per cent of the wealth distribution hold net assets of roughly $1.6 million or more, while entry into the top 1 per cent requires around $11 million. In France, Germany and Sweden, the top decile threshold is closer to €600,000-750,000. Median household wealth, by contrast, is below €200,000 in most European countries, and closer to $190,000 in the US. In Asia, data on wealth thresholds are more uncertain and partly missing, but recent estimates point to lower average wealth levels overall. In Japan, the top decile may begin at roughly $300,000, while in China and India the top 10 per cent typically hold wealth measured in the low hundreds of thousands of dollars rather than in the millions. The same variation applies to what counts as ‘large’ inheritances across countries. In Sweden, the median inheritance is roughly equivalent to half a year’s average disposable income, while inheritances in the top decile are often five to 10 times that amount. In the US, most inheritances are modest, with around half of heirs receiving less than $50,000, while estates exceeding $500,000 account for the majority of total inherited value. In France and Germany, inheritances above €500,000 are uncommon but economically significant, placing heirs immediately in the upper tail of the wealth distribution. A common assumption, prominently associated with Piketty, is that declining inheritance paved the way for meritocracy, while rising inheritance reverses that progress. The logic seems intuitive: rich parents leave large bequests, poor parents leave little, and wealth disparities persist or grow across generations. There is truth in this, but it is incomplete. Wealthy heirs do inherit more in absolute terms. Yet those inheritances often represent only a small addition to already large asset holdings. By contrast, when middle- or lower-wealth individuals inherit, the bequest can be transformative. Inheriting $100,000 barely matters for someone with $10 million, but it can be life-changing for someone with $50,000. Empirical studies across several countries, including the US, Denmark and Sweden, confirm this pattern. Figure 3 below, based on comprehensive Swedish register data, shows that, while heirs in the top wealth decile receive the largest inheritances in kronor terms, these sums constitute only a modest fraction of their existing wealth. For heirs lower in the distribution, smaller bequests often double or triple net worth. As a result, inheritance compresses the wealth distribution among heirs. In Sweden, inheritances reduced the Gini coefficient of wealth inequality by around 7 per cent during the study period, an effect comparable to that of a major stock market downturn. Figure 3: Inheritance and heirs’ wealth: larger gaps and smaller inequality. From ‘Inheritance and Wealth Inequality: Evidence from Population Registers’ (2018) by Mikael Elinder, Oscar Erixson and Daniel Waldenström, Journal of Public Economics Let’s be clear: not everyone receives an inheritance. Those who lack wealthy parents, of course, will not get this boost. This is one reason inheritance can be seen as creating inequality of opportunity between people who come from different family circumstances. 
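To make the compression result in Figure 3 concrete, here is a toy calculation with invented figures rather than the Swedish register data. The richest heir receives the largest bequest in absolute terms, but because that bequest is small relative to what the heir already owns, a summary measure of inequality such as the Gini coefficient falls once inheritances are added.

```python
# A toy illustration, with invented numbers, of why inheritance can compress
# wealth inequality among heirs: richer heirs receive larger bequests in
# absolute terms, but those bequests are small relative to existing wealth.

def gini(values):
    """Gini coefficient computed from the standard sorted-weights formula."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

# Five hypothetical heirs: net worth before inheriting (thousands of euros).
wealth_before = [20, 60, 150, 400, 2000]
# Bequests rise with parental wealth, but far less than proportionally.
bequests      = [40, 60, 80, 120, 200]

wealth_after = [w + b for w, b in zip(wealth_before, bequests)]

print(f"Gini before inheritance: {gini(wealth_before):.3f}")  # about 0.65
print(f"Gini after inheritance:  {gini(wealth_after):.3f}")   # about 0.60
```

The compression depends, of course, on bequests being distributed less unequally than the heirs’ own wealth, which is precisely what the register studies cited above document.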
But, among those who do inherit, the evidence suggests inheritance tends to even out the distribution a bit, rather than skewing it further. It’s a reminder that most parents, not just the super-rich, leave something to their kids, and those modest bequests, maybe a paid-off house, a small stock portfolio or a bit of savings, can significantly improve the financial security of the less-wealthy majority of heirs. There is, however, another side to the coin. Common sense, and plenty of data, tell us that wealthy parents tend to have wealthier-than-average children. Recent research also quantifies this, finding that a substantial portion of that parent-child wealth correlation is due to inheritances themselves. One study of Swedish multigenerational data found that half or more of the persistence of wealth from one generation to the next can be attributed to inheritances and gifts. In other words, if you remove inheritances from the equation, the resemblance between a rich person’s rank and their child’s rank in the wealth distribution would drop by more than 50 per cent. This is a striking confirmation that inherited capital is a key mechanism by which family financial privilege is maintained. For those who worry about equality of opportunity, this fact is understandably concerning: it implies that the ‘birth lottery’– being born into a wealthy family – still carries a big advantage, primarily because of the legacy of wealth passed down. Is the role of bequests in intergenerational wealth mobility a reason to condemn inheritance as socially harmful? One could certainly argue so. If our goal is a society where everyone truly starts at the same line, inheritances are an obvious head start for some and not for others. Philosophers and economists who champion equal opportunity often cite this as the core justification for inheritance taxes. The idea is that large unearned windfalls violate the meritocratic ideal – why should someone get a million dollars just because of their parents, when others get nothing? This line of reasoning led thinkers from John Stuart Mill to modern policymakers to advocate taxing or even capping inheritances in the interest of fairness. A more targeted way to promote equality of opportunity might be to focus on the beginning of life, not the end. The median age of heirs in Sweden is 55 years, a stage in life when most choices have been made and circumstances are no longer pivotal. Instead, opportunity-equalising policies aim at the beginning of life, like those investing in quality education and universal healthcare. These measures empower individuals lacking family wealth without punishing the act of inheritance per se. The US statesman Benjamin Franklin stated in 1789 that nothing in life is certain except death and taxes. The idea of a levy on inherited wealth has a long pedigree: from ancient times through the modern era, rulers have seen death as a taxable event, and progressives have seen inheritance taxes as a way to prevent the formation of an idle rich. In practice, however, inheritance or estate taxes have repeatedly disappointed on both fairness and efficiency grounds. Around the world, these taxes tend to raise little revenue, distort economic decisions, and often end up riddled with exemptions that undermine their egalitarian intent. The result is a tax that manages to be both unpopular and ineffective, a rare double dud in policy terms. Let’s examine the key issues. As revenue instruments, they yield little. 
In OECD countries that levy them, inheritance taxes raise around 0.5 per cent of GDP, a trivial sum in tax systems collecting 30-40 per cent of national income. The low yield is partly due to policy design: lawmakers, aware of the tax’s unpopularity, usually set high exemption thresholds and carve out loopholes for certain assets. Valuation and liquidity problems further complicate matters. Heirs to family businesses or illiquid assets may face large tax bills without the cash to pay them, forcing sales that destroy productive enterprises. To avoid this, governments carve out exemptions for businesses and certain assets. These carve-outs, in turn, create inequities and avoidance opportunities. The result is a tax that often misses the largest fortunes while burdening middling estates, an outcome that appears regressive and corrosive to trust. A telling case was Sweden in the late-20th century. Historically, the country had a steep inheritance tax, but it included exemptions for family businesses and other carve-outs. This situation eroded political support across the spectrum. Indeed, Sweden abolished its inheritance tax entirely in 2004, in a reform enacted by a Social Democratic government that faced virtually no opposition from the Left or the Right. The Swedish experience is not unique: many countries that once had inheritance or estate taxes have repealed them over the past few decades, often with broad public approval. For example, Canada already abolished its federal inheritance tax in the 1970s, Australia did the same by the early 1980s, while Austria and Norway scrapped theirs in the 2000s and 2010s, respectively. Germany and France still have inheritance taxes, but they come with so many deductions and with such moderate rates that they are far less onerous than in the past. The US dramatically raised its federal estate tax exemption (now only multimillion-dollar estates pay) and reduced the number of taxable estates to a few thousand per year. The overarching movement is clear: fewer countries are taxing inheritance now than in the 1960s, and those that do generally collect far less from it. Even where the tax remains on paper, it often survives in a hollowed-out form, with numerous exclusions. This does not mean capital should go untaxed. On the contrary, taxes on capital income – profits, dividends, and realised capital gains – have proven to be far more effective at raising revenue with fewer distortions. In all OECD countries, capital income taxes account for much more than 90 per cent of total capital tax revenues, indicating that these are the taxes that work in practice and align with taxpayers’ ability, and their demonstrated willingness, to pay. By contrast, taxing the stock of wealth or imposing a one-time levy at death tends to generate limited revenue while creating substantial distortions, including valuation problems for rarely traded assets and liquidity constraints for taxpayers whose taxable wealth is not linked to cash flows. Research in public finance consistently finds that capital income taxation avoids many of these practical and economic difficulties, making it a more robust instrument for taxing wealth in modern economies. Inheritance sits at a crossroads of human aspirations and social justice. It is at once a deeply personal practice, a final gift from one generation to the next, and a phenomenon with broad economic consequences. 
We have seen that the role of inherited wealth in Western economies is significant but not overwhelming, and in many respects it has been tempered by modern growth. We’ve also seen that inheritance can have conflicting effects: helping some less-wealthy heirs climb the ladder even as it helps wealthy families stay on top. Inheritance taxes, conceived as a remedy to the inequality that inheritances might foster, have largely failed to live up to their promise and have been cast aside by many countries after decades of frustration. So where does this leave us? Perhaps with the realisation that inherited wealth is a natural byproduct of a healthy, growing economy, not an aberration to be eliminated. People build and pass on wealth for the same reasons they engage in any long-term enterprise: to better their family’s condition, to create a legacy, to contribute to their loved ones’ futures. These motivations drive productive activities that benefit society at large – investments, businesses, philanthropy. Curtailing them too harshly could sap that vitality. And when wealth does get passed down, the outcome is not uniformly pernicious; often it spreads capital to places it’s needed, financing new opportunities. Of course, none of this is to deny that large inherited fortunes can give undue advantages. But there are more direct and constructive ways to address that concern than by taxing inheritances across the board. For instance, ensuring high-quality education for all helps level the playing field at the start of life, so that even those without wealthy parents have the skills to prosper. Encouraging entrepreneurship and home ownership for a broad swath of the population gives more people a chance to accumulate assets within their own generation, which eventually also gives them something to bequeath. In essence, the key to a fairer society is not to tear down the wealth of the past, but to empower more people to build the wealth of the future. Over the past century, the Western world has seen broad-based growth and a democratisation of capital through homeownership, pension systems, and broader stock ownership. These trends have done more to reduce inequality than any inheritance tax ever has. Countries became more equal when ordinary citizens gained wealth, not when a few rich heirs were taxed a bit more. This is a crucial lesson. It suggests that if we want to continue the progress toward a prosperous and equitable society, we should focus on policies that expand opportunities and enable the many to share in wealth creation, rather than fixating on slicing up the estates of the few. Source of the article

GOATReads: History

Antisemitism

Antisemitism, sometimes called history’s oldest hatred, is hostility or prejudice against Jewish people. The Nazi Holocaust is history’s most extreme example of antisemitism. Antisemitism did not begin with Adolf Hitler: Antisemitic attitudes date back to ancient times. In much of Europe throughout the Middle Ages, Jewish people were denied citizenship and forced to live in ghettos. Anti-Jewish riots called pogroms swept the Russian Empire during the 19th and early 20th centuries, and antisemitic incidents have increased in parts of Europe, the Middle East and North America in the last several years. The term antisemitism was first popularized by German journalist Wilhelm Marr in 1879 to describe hatred or hostility toward Jews. The history of antisemitism, however, goes back much further. Hostility against Jews may date back nearly as far as Jewish history. In the ancient empires of Babylonia, Greece, and Rome, Jews—who originated in the ancient kingdom of Judea—were often criticized and persecuted for their efforts to remain a separate cultural group rather than taking on the religious and social customs of their conquerors. With the rise of Christianity, antisemitism spread throughout much of Europe. Early Christians vilified Judaism in a bid to gain more converts. They accused Jews of outlandish acts such as “blood libel”—the kidnapping and murder of Christian children to use their blood to make Passover bread. These religious attitudes were reflected in anti-Jewish economic, social and political policies that persisted into the European Middle Ages.

Antisemitism in Medieval Europe

Many of the antisemitic practices seen in Nazi Germany actually have their roots in medieval Europe. In many European cities, Jews were confined to certain neighborhoods called ghettos. Some countries also required Jews to distinguish themselves from Christians with a yellow badge worn on their garment, or a special hat called a Judenhut. Some Jews became prominent in banking and moneylending, because early Christianity didn’t permit moneylending for interest. This bred economic resentment, which contributed to the expulsion of Jews from several European countries including France, Germany, Portugal and Spain during the fourteenth and fifteenth centuries. Jews were denied citizenship and civil liberties, including religious freedom, throughout much of medieval Europe. Poland was one notable exception. In 1264, Polish prince Bolesław the Pious issued a decree allowing Jews personal, political and religious freedoms. Jews did not receive citizenship and gain rights throughout much of western Europe, however, until the late 1700s and 1800s.

Russian Pogroms

Throughout the 1800s and early 1900s, Jews throughout the Russian Empire and other European countries faced violent, anti-Jewish riots called pogroms. Pogroms were typically perpetrated by a local non-Jewish population against their Jewish neighbors, though they were often encouraged and aided by the government and police forces. In the wake of the Russian Revolution, an estimated 1,326 pogroms took place across Ukraine alone, leaving nearly half a million Ukrainian Jews homeless and killing an estimated 30,000 to 70,000 people between 1918 and 1921. Pogroms in Belarus and Poland also killed tens of thousands of people.

Nazi Antisemitism

Adolf Hitler and the Nazis rose to power in Germany in the 1930s on a platform of German nationalism, racial purity and global expansion.
Hitler, like many antisemites in Germany, blamed the Jews for the country’s defeat in World War I, and for the social and economic upheaval that followed. Early on, the Nazis undertook an “Aryanization” of Germany, in which Jews were dismissed from civil service, Jewish-owned businesses were liquidated and Jewish professionals, including doctors and lawyers, were stripped of their clients. The Nuremberg Laws of 1935 introduced many antisemitic policies and outlined the definition of who was Jewish based on ancestry. Nazi propagandists had swayed the German public into believing that Jews were a separate race. According to the Nuremberg Laws, Jews were no longer German citizens and had no right to vote.

Kristallnacht

Jews became routine targets of stigmatization and persecution as a result. This culminated in a state-sponsored campaign of street violence known as Kristallnacht (the “night of broken glass”), which took place on November 9-10, 1938. In two days, more than 250 synagogues across the Reich were burned and 7,000 Jewish businesses looted. The morning after Kristallnacht, 30,000 Jewish men were arrested and sent to concentration camps.

Holocaust

Prior to Kristallnacht, Nazi policies toward Jews had been antagonistic but primarily non-violent. After the incident, conditions for Jews in Nazi Germany became progressively worse as Hitler and the Nazis began to implement their plan to exterminate the Jewish people, which they referred to as the “Final Solution” to the “Jewish problem.” Between 1939 and 1945, the Nazis would use mass killing centers called concentration camps to carry out the systematic murder of roughly 6 million European Jews in what would become known as the Holocaust.

Antisemitism in the Middle East

Antisemitism in the Middle East has existed for millennia, but it has increased greatly since World War II. Following the establishment of a Jewish state in Israel in 1948, the Israelis fought for control of Palestine against a coalition of Arab states. At the end of the war, Israel kept much of Palestine, resulting in the forced exodus of roughly 700,000 Muslim Palestinians from their homes. The conflict created resentment over Jewish nationalism in Muslim-majority nations. As a result, antisemitic activities grew in many Arab nations, causing most Jews to leave over the next few decades. Today, many North African and Middle Eastern nations have only small Jewish populations remaining.

Antisemitism in Europe and the United States

Antisemitic hate crimes have spiked in Europe in recent years, especially in France, which has the world’s third largest Jewish population. In 2012, three children and a teacher were shot by a radical Islamist gunman in Toulouse, France. In the wake of the mass shooting at the satirical weekly newspaper Charlie Hebdo in Paris in 2015, four Jewish hostages were murdered at a kosher supermarket by an Islamic terrorist. The U.K. logged a record 1,382 hate crimes against Jews in 2017, an increase of 34 percent from previous years. In the United States, antisemitic incidents rose 57 percent in 2017—the largest single-year increase ever recorded by the Anti-Defamation League, a Jewish civil rights advocacy organization. 2018 saw a doubling of antisemitic assaults, according to the ADL, and the single deadliest attack against the Jewish community in American history—the October 27, 2018 Pittsburgh synagogue shooting. Source of the article

The machines that made Manchester

Within the city’s Science and Industry Museum, a whirling, spinning array of engines are still the stars of the show in a fine £18.9m makeover

The site of the Science and Industry Museum in Manchester is, as the official blurb puts it without undue hyperbole, a place that “changed the world”. The first purpose-built passenger railway, the Liverpool and Manchester, started running from here in 1830, and over the next century or more it helped fuel the industrial and commercial beast of the city around it, its trains running in and out to feed factories with cotton, and people with imported food. It is, according to the former Science Museum director Neil Cossons, the “Stonehenge of the Industrial Revolution”. Now it’s a 2.6-hectare (6.5-acre) zone of heritage and open space, standing amid the real estate explosion of modern Manchester, whose shiny towers pop up in the background. Iron and brick structures, originally arranged according to the hard functional logic of gradients and track radiuses, are honoured with Grade I and Grade II listings. Layouts dictated by the needs of machines are gradually being turned over to gentler uses, as open spaces accessible to the growing population of central Manchester. The museum, first installed here in 1983, is in the middle of a comprehensive makeover of the whole site, planned for completion in time for the bicentenary of the railway in 2030. In the middle of it stands the Power Hall, a 108-metre-long (about 355ft) shipping shed where, from 1855, goods trains were once unloaded by men and cranes. It now houses a whirling, spinning, chuntering array of machines that previously performed tasks as diverse as powering hundreds of looms in a Rochdale mill, a dough mixer in a bakery, church organs and chip shops. There are the kind of engines that stand still in a factory, and locomotives: a grey, 28-wheel monster that hauled long coal trains across the hilly terrain of South Africa; and a tank engine that pulled holiday trains on the Isle of Man, now partly cut away so you can see its inner workings. There’s a replica of the Planet, a tall-chimneyed yellow teapot of a vehicle that, when it ran on the newly opened Liverpool and Manchester railway, was cutting-edge technology. Most were powered by steam, some by gas or oil or electricity, and there’s a pumping engine from Manchester’s Victorian hydraulic power system that, until 1972, lifted grain sacks in warehouses and safety curtains in theatres with pressurised water run through pipes under the city’s streets. Almost all were made in or near Manchester, many of them rescued by enthusiasts from the once proud businesses that ran them when they fell victim to new technologies and overseas competition in the second half of the last century. Many are kept running by the museum, so you can see and hear their different speeds and rhythms, and smell their oil. This week a rejuvenated version of this temple of manufacture reopens as Power Hall: the Andrew Law Gallery, named after a Manchester-raised hedge fund manager whose foundation has helped fund the £18.9m cost of the project. It’s an undertaking where a problem – the shed’s leaking roof needed fixing – was made into an opportunity, which was to present the industrial ballet of the engines better than ever before. There was also a wish – superficially oxymoronic – to make this commemoration of fossil-fuel machinery, with the help of the engineers Max Fordham, as sustainable as can be.
For if, as the museum says, these machines helped make the modern world, that includes pollution and the climate crisis, as well as mass-produced clothing. Factories ran off arrays of boilers that each consumed about three tonnes of coal – the weight of a male white rhinoceros – every day. Average life expectancy in Manchester in the 1850s was 31, partly as a result of respiratory diseases caused by all the smoke. Then again, science and industry move on, and the ingenuity that once went into steam engines now goes into less damaging alternatives. So the new Power Hall is run off a heat pump that draws its energy from an aquifer 90 metres below ground. The old machines are now powered by steam heated from this source, and the three-dimensional labyrinth of silvery pipes and boilers that help to make this happen is on public view – an exhibit in itself. Source of the article

Could depreciation be the spike that bursts the AI bubble?

AI firms may be replacing their chips faster than they let on, cutting into their profits. But the impact is hard to quantify Demis Hassabis, the chief of Google’s DeepMind, suggested last week that he thinks parts of the AI industry are “bubble-like” and that “multibillion-dollar seed rounds in new start-ups that don’t have a product” are unsustainable. Is he deflecting? Are startup valuations really the worry? Or is it the hyper-scalers themselves? The infamous investor Michael Burry recently claimed that companies, including Meta, Google, Oracle, Microsoft and Amazon, are artificially boosting earnings by extending the useful life of their data centre assets. The argument is predicated on the assumption that Nvidia, the key supplier of data centre chips, is innovating at a pace that shortens the “useful life” of data centres, rendering them obsolete. Take this example laid out by one tech expert: Nvidia is now bringing out a new generation of chips every 12 to 18 months. Blackwell chips came out the first quarter of 2025, and Vera Rubin will be launched in Q3 of 2026. However, the way the industry is currently set up means that the prices of previous generation chips can drop by up to 50% the moment a new generation of chips comes to the market. This is a headache for data centre providers. At the moment, servers have a useful life of four to five years by book value. Under a normal depreciation cycle, that would mean 20% write-down in value. But if chips devalue over two years, a $10bn data centre would have to write off $5bn a year, meaning the data centre needs to generate about $20bn in revenue. If it turns out tech firms are having to replace AI equipment more frequently than they’re letting on, this would cut into profits and would make it more expensive for them to raise capital, analysts say. The upshot, says one source, is that providers “are having to find creative ways to monetise assets”. But making precise calculations about how depreciation could affect values is difficult. The AI boom has been happening since 2022. There is no real track record for how long chips last compared with other types of heavy equipment that businesses have been using for decades. Source of the article
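A back-of-the-envelope check on the write-down arithmetic in the piece above. The sketch assumes simple straight-line depreciation on the illustrative $10bn data centre mentioned in the article; actual depreciation schedules and revenue targets at the big providers are not public, so the point is only the relative size of the charges.

```python
# Straight-line depreciation on an illustrative $10bn data centre, showing how
# shortening the assumed useful life changes the annual write-off. Figures are
# illustrative only; real providers' schedules and margins are not public.

def annual_writeoff(asset_cost_bn: float, useful_life_years: int) -> float:
    """Straight-line depreciation charge per year, in billions of dollars."""
    return asset_cost_bn / useful_life_years

ASSET_COST_BN = 10.0

for life in (5, 4, 2):
    charge = annual_writeoff(ASSET_COST_BN, life)
    share = charge / ASSET_COST_BN
    print(f"{life}-year life: ${charge:.1f}bn written off per year "
          f"({share:.0%} of book value)")
```

Moving from a five-year life to a two-year life raises the annual charge from $2bn to $5bn on the same asset, which is the gap Burry argues is flattering reported profits if chips really become uneconomic after a couple of years.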

GOATReads: Politics

What India's Republic Day red carpet means for its foreign policy

India will mark its 77th Republic Day on 26 January - the day when the country adopted its constitution and formally became a republic, breaking from its colonial past. The annual grand parade will take place along Delhi's iconic central boulevard, with military tanks rolling past and fighter jets roaring overhead as thousands watch. The parade is a spectacle in itself, but attention is also focused on who is occupying the most prominent seats at the ceremony. This year, it will be European Commission President Ursula von der Leyen and European Council President António Costa. India has invited them as chief guests for the celebrations, placing the European Union at the centre of one of the country's most prestigious state events. On this day, India turns the heart of its capital into a stage. Thousands of troops march before cheering crowds, armoured vehicles move down the Kartavya Path (formerly Rajpath or King's Avenue) and colourful tableaux or floats pass by spectators in Delhi, while millions more watch on their screens across the country. The parade is presided over by the Indian president, with the chief guest seated alongside - closer to the presidential chair than even the senior-most government officials. Who sits next to India's president has long been read as more than a matter of protocol. Over the decades, the choice of chief guest has come to be closely watched as an indicator of India's foreign policy priorities and the relationships Delhi wants to highlight at a particular moment, experts say. The practice began in 1950 with the then Indonesian president, Sukarno, attending India's first Republic Day parade. In its early years as a republic, India prioritised ties with other newly-independent countries - a focus reflected in its early choice of chief guests. Since then, the parade has hosted leaders from across the world, reflecting shifts in India's global relations and strategic priorities. The chief guests have been from leaders of neighbouring countries - such as Bhutan and Sri Lanka - to heads of state and government from major powers, including the US and the UK. The UK has featured as the chief guest five times - including Queen Elizabeth II and Prince Philip - reflecting the long and complex history between the two countries. Leaders from France and Russia (formerly Soviet Union) have also been invited nearly five times since 1950, reflecting India's long-standing strategic ties with the two countries. With such a wide range of past guests, the question is how India decides who receives an invitation in a particular year. The selection process is largely out of public view. Former diplomats and media reports say it typically begins within the foreign ministry, which prepares a shortlist of potential invitees. The final decision is taken by the prime minister's office, followed by official communication with the select countries - a process that can take several months. A former foreign ministry official speaking on condition of anonymity said: "Strategic objectives, regional balance and whether a country has been invited before are all taken into account." Former Indian ambassador to the US Navtej Sarna said a lot of thought goes into the decision making. "It's a balance between important partners, neighbours and major powers," he said, adding that availability of the state leader during that time also plays a crucial role. Foreign policy analyst Harsh V Pant said the evolving list of chief guests mirrors India's changing engagement with the world. 
"If you think of the EU delegation this year, with its leadership coming, it's very clear that we are doubling down on our engagement with the EU." He added that most likely a trade deal would be announced - signalling that India and the European bloc are on the same page when it comes to the current geopolitical situation. This comes as India continues to engage with the US on a trade deal. The talks, which have been going on for almost a year, have strained their relationship since the US imposed 50% tariffs on Indian goods, the highest in Asia, including penalties linked to India's purchase of Russian oil. "It [the choice of the parade's chief guest] gives you a sense of India's priorities at that particular point - which geography it wants to focus on, or whether there is a milestone it wants to mark," Pant said, pointing out that India continued to engage closely with the global south. In 2018, for example, leaders of the Association of Southeast Asian Nations (Asean) were invited as chief guests. It was the first time a regional grouping was invited - marking 25 years of India's engagement with the bloc, Pant added. At the same time, some absences from the guest list have also reflected strained relationships. Pakistani leaders attended twice as chief guests before the neighbours went to war in 1965. Islamabad has not been invited thereafter - a sign of continuing strain in ties. The only time China attended was when Marshal Ye Jianying came in 1958, four years before the two countries went to war over their disputed border. But the significance of Republic Day extends beyond diplomacy and guest lists. Analysts says India's parade stands apart from similar military displays elsewhere in the world for a number of reasons. The fact that India has a guest almost every year is one of them. Also, for most countries, these parades commemorate military victories. Like Russia's Victory Day marks the defeat of Germany in World War Two, France's Bastille Day celebrates the start of the French Revolution and the eventual fall of the monarchy, and China's military parade marks their victory over Japan in World War Two. India's celebration, by contrast, is centred on the constitution, says Pant. "For many other countries, these celebrations are related to victories in war. We don't celebrate that. We celebrate becoming a constitutional democracy - the coming into effect of the constitution." Unlike military parades in many Western capitals, India's Republic Day also blends displays of its military capability with cultural performances and regional tableaux, projecting both power and diversity. Beyond strategy and symbolism, the parade often leaves a more personal impression on visiting leaders. The former official who spoke anonymously recalled how the Obamas were particularly struck by the camel-mounted contingents - a moment that stayed with them long after the formal ceremonies ended. Source of the article

How Big Tech killed literary culture

The philistines are in charge now “I would never read a book,” declared crypto kingpin Sam Bankman-Fried in a fawning profile published by the venture capital firm Sequoia in September 2022. “I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. I think, if you wrote a book, you fucked up.” When SBF, as he likes to be called, was arrested for fraud and money laundering a couple of months later, journalists and other literary sorts seized on his words as evidence of moral corruption. Expressing a disdain for books is not only “ignorant and arrogant”, wrote The Atlantic’s Thomas Chatterton Williams; it signals “a much larger deficiency of character”. “Sam Bankman-Fried doesn’t read,” ran the headline of a Washington Post editorial. “That tells us everything.” But if SBF’s comeuppance provided comfort to the bookish set, it was of an icy variety. In dismissing books as outdated information-delivery devices, the young entrepreneur wasn’t saying anything out of the ordinary. He was giving voice to the zeitgeist. The philistines have taken over the culture. What we’re seeing as we enter 2026 is a reversal of the situation C. P. Snow described in his celebrated 1959 lecture “The Two Cultures”. Snow, a Cambridge physicist turned popular novelist, argued that the culture of the West had split into two camps. On one side were “literary intellectuals” — novelists, poets, artists, critics. On the other were what we would today call STEM types — scientists, technologists, engineers, mathematicians. Between the two lay “a gulf of mutual incomprehension”. The intellectuals took pride in their lack of interest in scientific and technical developments, while the boffins remained largely ignorant of everything encompassed by the then-common phrase “high culture”. The two camps, Snow observed, might as well have lived on different planets. Along with the knowledge divide came a power divide. To the public, the literary intellectuals — Snow pointed to the poet and critic T. S. Eliot as the “archetypal figure” — constituted the cultural elite. They were the ones who appeared in glossy magazines and high-brow television shows. They were the ones whose words determined what was worth talking about, whose tastes established the boundary between the great and the trifling. The scientists and engineers, with a few notable exceptions, remained out of the public eye, toiling in lab-coated anonymity. Their discoveries and inventions were shaping the future, but as individuals they held little cultural currency. Today, the divides remain, but the power dynamic has been turned on its head. The STEM camp, in particular its technological wing, dominates the culture. Techies take prominent seats at presidential inaugurations and White House banquets. Their words and actions set much of the public’s daily agenda. And not only are they ubiquitous presences in the media; they’ve come to control the media, as news and entertainment have shifted onto the digital platforms they control. Mark Zuckerberg, Sam Altman, Jeff Bezos, Elon Musk: these are our new T. S. Eliots. As for old-style public intellectuals, they’ve disappeared from the scene. There are still serious writers and artists and critics, but they work in insular obscurity, speaking less and less to the general public and more and more to one another. In universities, humanities departments shrink as students flock to STEM fields in pursuit of higher pay and greater status. 
In primary and secondary schools, art, music, and library programmes are cut to free up money for EdTech investments. Those who once created and celebrated high culture now seem ashamed even to speak the phrase. They’ve come to doubt their own worth. The cultural shift runs deep. People’s skills and habits are changing along with their perceptions. In the last 20 years, the number of Americans who read for pleasure plummeted by 40%, according to a University of Florida study published last summer. A third of US kids now graduate high school without basic reading skills, and a quarter of adults read at a third-grade level or worse. In the UK, the percentage of children who enjoy reading in their spare time fell from 55% to 33% over the last 10 years, according to a long-running National Literacy Trust survey. If the expansion of reading and writing was a defining characteristic of the 19th and 20th centuries, their withering is a marker of the 21st. As Times columnist James Marriott writes, “Welcome to the post-literate society.” Those who built fortunes atop the rubble of literary culture smell victory. “Technology is the glory of human ambition and achievement, the spearhead of progress, and the realization of our potential,” proclaimed the venture capitalist Marc Andreessen in “The Techno-Optimist Manifesto”, a much-discussed 2023 blog post. He continued: “Technology is liberatory. Liberatory of human potential. Liberatory of the human soul, the human spirit. Expanding what it can mean to be free, to be fulfilled, to be alive. We believe technology opens the space of what it can mean to be human.” One need not be blind to the joys and bounties of technological innovation to find such triumphalist blather repugnant. (In what might generously be interpreted as an adolescent attempt at provocation, Andreessen echoes the words of “The Futurist Manifesto” written more than a century ago by the proto-fascist F. T. Marinetti.) But, like SBF’s remark a year earlier, Andreessen’s screed catches the tenor of our time. It elevates technology to the position that art, literature, and even religion used to hold. Technology is the highest expression of human creativity, the dwelling-house of soul and spirit. Our instruments and our systems, not our inner lives, provide us with meaning and purpose. In Silicon Valley’s rigidly utilitarian conception of the human condition, there’s no room for imagination or aesthetics, metaphysics or faith. As for beauty, we’ll have to content ourselves with discovering it in an app interface. The rise of generative artificial intelligence, as both a practical technology and a popular obsession, crystallises Big Tech’s cultural takeover. AI turns reading and writing into automated industrial routines, optimised for speed and efficiency. It instrumentalises intellect, allows the critical and creative work of the mind to be outsourced to machinery. The public, whatever fears it may have about the ultimate consequences of the technology, seems happy to employ it in myriad ways to save time and money. In his most influential essay, “Tradition and the Individual Talent”, T. S. Eliot argued that poets and other artists, through years of deep reading and careful observation, forge within themselves an understanding of the works in their field so deep it turns instinctual. Beyond knowledge, they gain “the consciousness of the past”. 
Having internalised tradition, they can then apply the singular power of their own sensibility to transform it into something new, something that transcends the personal to become universal. Individual talent becomes the crucible that sustains tradition by carrying it, in new forms, into the future. In the aggregate, all these intricately connected acts of creation form the culture. Generative AI gives us a parody of Eliot’s creative process. Tradition is replaced by a vast statistical model constructed of digital representations of the works of the past. Individual talent is replaced by a prediction function that mindlessly extracts patterns of data from the model and serves them up in the form of text, image, or sound. “Poetry,” Eliot wrote, “is not a turning loose of emotion, but an escape from emotion; it is not the expression of personality, but an escape from personality.” He immediately added a crucial clarification: “Of course, only those who have personality and emotions know what it means to want to escape from these things.” Bots like ChatGPT, Gemini, and Claude have no personality or emotions to escape from. In speaking to us with the voice of nobody, they mock us. Their hollowness is our hollowness. There’s a deep irony here. Under the reign of the STEM lords, the public’s trust in science and scientists is decaying. Far from promoting empiricism and objectivity, the cult of virtuality is encouraging a return to superstition, subjectivity, and myth. The overthrow of the old intellectual elite and its replacement by the new technological elite, we can now see, opened a cultural hole that has been filled by ideological fervour, financial speculation, and prideful ignorance, all amplified by a digital communication system that, in its inhuman scale and speed, leaves little room for reflection or discretion. Whatever their shortcomings, the old avatars of high culture shared a set of values — a respect for talent, taste, and tradition; a disdain for the flimsy and the fake; a commitment to rigour in thought and expression — that encouraged the pursuit of excellence not only in art and literature but in science, engineering, and other practical pursuits. If Snow were alive today, he might feel nostalgic about the state of affairs he bemoaned at the dawn of the Sixties. He might now see that the divide between the two cultures was not as sharp as he supposed. Their differences masked a deeper bond. When, in a recent jailhouse interview, Bankman-Fried revealed he had started reading novels, I couldn’t help but think of the concluding couplet of W. H. Auden’s “In Memory of W. B. Yeats”:

In the prison of his days
Teach the free man how to praise.

Maybe there’s hope. Source of the article

GOATReads: Psychology

Why Do Some People Thrive on So Little Sleep?

Short sleepers cruise by on four to six hours a night and don’t seem to suffer ill effects Everyone has heard that it’s vital to get seven to nine hours of sleep a night, a recommendation repeated so often it has become gospel. Get anything less, and you are more likely to suffer from poor health in the short and long term—memory problems, metabolic issues, depression, dementia, heart disease, a weakened immune system. But in recent years, scientists have discovered a rare breed who consistently get little shut-eye and are no worse for wear. Natural short sleepers, as they are called, are genetically wired to need only four to six hours of sleep a night. These outliers suggest that quality, not quantity, is what matters. If scientists could figure out what these people do differently it might, they hope, provide insight into sleep’s very nature. “The bottom line is, we don’t understand what sleep is, let alone what it’s for. That’s pretty incredible, given that the average person sleeps a third of their lives,” says Louis Ptáček, a neurologist at the University of California, San Francisco. Scientists once thought sleep was little more than a period of rest, like powering down a computer in preparation for the next day’s work. Thomas Edison called sleep a waste of time—“a heritage from our cave days”—and claimed to never sleep more than four hours a night. His invention of the incandescent lightbulb encouraged shorter sleep times in others. Today, a historically high number of U.S. adults are sleeping less than five hours a night. But modern sleep research has shown that sleep is an active, complicated process we don’t necessarily want to cut short. During sleep, scientists suspect that our bodies and brains are replenishing energy stores, flushing waste and toxins, pruning synapses and consolidating memories. As a result, chronic sleep deprivation can have serious health consequences. Most of what we know about sleep and sleep deprivation stems from a model proposed in the 1970s by a Hungarian Swiss researcher named Alexander Borbély. His two-process model of sleep describes how separate systems—circadian rhythm and sleep homeostasis—interact to govern when and how long we sleep. The circadian clock dictates the 24-hour cycle of sleep and wakefulness, guided by external cues like light and darkness. Sleep homeostasis, on the other hand, is driven by internal pressure that builds while you’re awake and decreases while you’re asleep, ebbing and flowing like hunger. There’s variation in these patterns. “We’ve always known that there are morning larks and night owls, but most people fall in between. We’ve always known there are short sleepers and long sleepers, but most people fall in between,” says Ptáček. “They’ve been out there, but the reason that they haven’t been recognized is that these people generally don’t go to doctors.” That changed when Ptáček and his colleague Ying-Hui Fu, a human geneticist and neuroscientist at the University of California, San Francisco, were introduced to a woman who felt that her early sleep schedule was a curse. The woman naturally woke up in the wee hours of the morning, when it was “cold, dark and lonely.” Her granddaughters inherited her same sleep habits. The researchers pinpointed the genetic mutation for this rare type of morning lark, and after they published their findings, thousands of extreme early risers came out of the woodwork. But Fu recalls being intrigued by one family who didn’t fit the pattern. 
These family members woke up early but didn’t go to bed early, and they felt refreshed after only about six hours of sleep. They were the first people identified with familial natural short sleep, a condition that runs in families like other genetic traits. Fu and Ptáček traced their abbreviated slumber to a mutation in a gene called DEC2. The researchers went on to genetically engineer the DEC2 mutation into mice, showing that the animals need less sleep than their littermates. And they found that one of the gene’s jobs is to help control levels of a brain hormone called orexin, which promotes wakefulness. Interestingly, orexin deficiency is a leading cause of narcolepsy, a sleep disorder marked by episodes of excessive daytime sleepiness. In people with short sleep, however, orexin production appears to be increased. Over time, the team has identified seven genes associated with natural short sleep. In one family with three generations of short sleepers, the researchers found a mutation in a gene called ADRB1, which is highly active in a region of the brain stem, the dorsal pons, that’s involved in regulating sleep. When the scientists used a technique to stimulate that brain region in mice, rousing them from their sleep, mice with the ADRB1 mutation woke more easily and stayed awake longer. In a father-son pair of short sleepers, the researchers identified a mutation in another gene, NPSR1, which is involved in regulating the sleep-wake cycle. When they created mice with the same mutation, they found that the animals spent less time sleeping and, in behavioral tests, lacked the memory problems that typically follow a short night’s sleep. The team also found two distinct mutations in a gene called GRM1, in two unrelated families with shortened sleep cycles. Again, mice engineered with those mutations slept less, with no obvious health consequences. Like the mice, people who are naturally short sleepers seem to be immune to the ill effects of sleep deprivation. If anything, they do extraordinarily well. Research suggests that such people are ambitious, energetic and optimistic, with remarkable resilience against stress and higher thresholds for pain. They might even live longer. Based on the findings in short sleepers, some researchers think it may be time to update the old two-process model of sleep; the work led Ptáček to propose a third influence. The updated model might unfold like this: In the morning, the circadian clock indicates it is time to start your day, and sleep homeostasis signals you’ve gotten enough sleep to get out of bed. Then a third factor—behavioral drive—compels you to go out and do your job, or find a mate, or gather sustenance. At night, the process goes in reverse, to calm the body down for sleep. Perhaps short sleepers are so driven that they are able to overcome the innate processes that keep others in bed. But it may also be that, somehow, the brains of short sleepers are built to sleep so efficiently that they are able to do more with less. “It’s not like there’s something magical about your seven to eight hours,” says Phyllis Zee, director of the Center for Circadian and Sleep Medicine at Northwestern University. Zee can imagine countless ways that short sleepers’ brains could be more efficient. Do they have more slow-wave sleep, the most restorative sleep stage? Do they generate higher amounts of cerebrospinal fluid, the liquid that bathes the brain and spinal cord, enabling them to get rid of more waste products?
Is their metabolic rate different, helping them cycle in and out of sleep more quickly? “It’s all about efficiency, sleep efficiency—that’s how I feel,” says Fu. “Whatever their body needs to do with sleep, they can get it done in a short time.” Recent studies from Fu and Ptáček suggest that naturally short sleepers may be more efficient at removing toxic brain aggregates that contribute to neurodegenerative disorders like Alzheimer’s disease. The researchers bred mice that had short sleep genes with mice that carried genes predisposing them to Alzheimer’s. The Alzheimer’s mice developed a buildup of abnormal proteins—amyloid plaques and tau tangles—that, in humans, are hallmarks of dementia. But the brains of the hybrid mice developed fewer of these tangles and plaques, as if the sleep mutations were protecting the animals. Fu believes that if she conducted similar studies in models of heart disease, diabetes or other illnesses associated with sleep deprivation, she would get similar results. It isn’t yet clear how the short sleeper genes identified thus far shield people from the ill effects of poor sleep, or how the mutations in these genes make sleep more efficient. To get at the answer, Fu and Ptáček started bringing short sleepers to their joint laboratory to measure their brain waves while they slept. Their sleep study was derailed by the Covid-19 pandemic, but they are eager to get it back on track. The researchers are also interested in understanding other sleep outliers. Sleep duration, like most behaviors, follows a bell curve. Short sleepers sit on one end of the curve, long sleepers on the other. Fu has found one genetic mutation associated with long sleep, but long sleepers are challenging to study because their schedules don’t align with the norms and demands of society. Long sleepers are often forced to get up early to go to school or work, which can result in sleep deprivation and may contribute to depression and other illnesses. But though sleep has a strong genetic component, it can also be shaped by the environment. Knowing that better sleep is possible, and understanding the basis, could point the way to interventions to optimize sleep, enabling more people to live longer, healthier lives. Zee’s lab, for example, has tinkered with using acoustic stimulation to boost the slow waves of deep sleep that enhance memory processing and may be one of the secrets to short sleepers’ success. In a study, they played pink noise—a softer, more natural sound than white noise, more akin to rain or the ocean—while study participants slept. The next day those participants remembered more in a test of learning and recalling word pairs. “We can enhance memory, but we’re not making them sleep longer or necessarily shorter,” says Zee. “I think there’s a lot more to learn.” For now, researchers recommend that people focus on getting the amount of sleep they need, recognizing it will be different for different people. Ptáček still bristles when he hears someone preach that everybody has to sleep eight hours a night. “That's like saying everybody in the population has to be 5 foot 10,” he says. “That's not how genetics works.” Source of the article
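A brief aside on the two-process model described above: the following is a toy numerical sketch in Python, written for this digest rather than taken from Borbély, Ptáček or Fu, and every constant in it is an assumed round number. It only illustrates the mechanics the article describes, namely a homeostatic pressure that builds while you are awake and dissipates while you sleep, gated by thresholds that rise and fall with a circadian rhythm.

    # Toy sketch of a two-process sleep model (illustrative constants only).
    import math

    TAU_RISE = 18.0    # hours: sleep pressure builds toward 1.0 while awake (assumed)
    TAU_DECAY = 4.0    # hours: sleep pressure decays while asleep (assumed)
    PERIOD = 24.0      # hours: circadian period
    AMPLITUDE = 0.12   # how strongly the circadian rhythm moves the thresholds (assumed)
    UPPER, LOWER = 0.60, 0.15   # baseline fall-asleep / wake-up thresholds (assumed)
    DT = 0.1           # simulation step, in hours

    def circadian(t):
        """Simple sinusoidal Process C; t is in hours."""
        return math.sin(2 * math.pi * (t - 16.0) / PERIOD)

    def simulate(days=3, pressure=0.3, awake=True):
        """Step Process S forward in time and report sleep/wake transitions."""
        t = 0.0
        while t < days * 24:
            if awake:
                pressure += (1.0 - pressure) / TAU_RISE * DT   # pressure builds
                if pressure > UPPER + AMPLITUDE * circadian(t):
                    awake = False
                    print(f"t={t:6.1f} h: fall asleep (S={pressure:.2f})")
            else:
                pressure -= pressure / TAU_DECAY * DT          # pressure dissipates
                if pressure < LOWER + AMPLITUDE * circadian(t):
                    awake = True
                    print(f"t={t:6.1f} h: wake up (S={pressure:.2f})")
            t += DT

    if __name__ == "__main__":
        simulate()

Lowering TAU_DECAY shortens the sleep bouts without touching the wake-up rule, which is one crude way to picture the kind of "more efficient" sleep the researchers describe; the published model, and real sleepers, are of course far richer than this.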

Did Water Form in the Earliest Years of the Universe?

A recent study suggests huge volumes of the molecule emerged during the cosmic dawn

Do us a favor: take a sip of water. Done it? Good. You probably needed rehydrating, but more importantly, I need to tell you something about the universe. Did you know that some of those water molecules were filtered through the trunk of an ancient tree that grew on Antarctica long before any ice covered it? Those same molecules were also once stolen by a plant that graced a hilltop on a planet that had yet to see a single flower. Before that, a mighty dinosaur drank from a pool that was once home to at least one of those molecules of water. The very first form of life, a microbe of some sort, may have been wriggling about on an effervescing hydrothermal vent as that molecule drifted through the abyssal depths of a long-forgotten sea. And billions of years ago, icy comets and soggy asteroids delivered that water molecule—and so many more like it—to a young world named Earth. But where did all that water originally come from? Most of the matter we interact with, made from plenty of the elements on the periodic table, was forged in the cataclysmic final seconds of countless stars that had exhausted their supplies of nuclear fuel. Hydrogen and oxygen, the two atomic components of common water, aren’t rare—and after enough stars had died in our corner of the Milky Way, it’d have a decent supply of water. But how old could some of that water be? Where, and when, did the very first droplets of water in the history of the universe form? Telescopes looking at the farthest reaches of space have found that abundances of water existed less than two billion years after the Big Bang. But a recent study, published in the journal Nature Astronomy, suggests something rather explosive: Water may have been present as early as 100 million to 200 million years after the universe came to be. According to the authors’ simulations, huge volumes of it were formed very close to, or at, cosmic dawn—the moment the very first generation of stars set the dark skies ablaze with light. It's difficult to overstate just how surprisingly early to the party this water may have been. “This suggests that water, the primary ingredient for life, existed even before the building blocks of our own galaxy were formed,” says Muhammad Latif, an astrophysicist at the United Arab Emirates University, and one of the study’s authors. There are some major caveats to this research. The team didn’t detect this ancient water; they used simulations of an as-yet-unseen type of star to understand how early on that water could have formed under certain conditions. But thanks to the high fidelity of these simulations, if these primordial stars were around at cosmic dawn, then this is probably how they would have died—with a bang, and a splash. “The simulations are state-of-the-art. So yes, the results are reliable and believable,” says Mike Norman, a physicist at the University of California, San Diego, who was not involved with the new research. And if these virtual recreations of stellar self-destruction are windows into the very distant past, then that might also mean our own waterlogged, paradisiacal world is just one in a considerably long line of oceanic planets. “The dense water cores are potential hosts of proto-planetary disks which may even lead to habitable planets forming at cosmic dawn,” says Latif. “In nutshell, life could have originated much earlier than previously thought.” The cosmos is built by chaos.
Stars inevitably die, in a variety of spectacular ways, and in doing so create then scatter a multitude of elements out into space. The most violent of these deaths are associated with truly giant stars and are known as supernovas—explosions that sometimes outshine entire galaxies. Sometimes these stars simply burn through all their internal fuel reserves and implode under their own immense gravity. Other times, a voracious star eats too much of a companion star nearby and gives itself a destructive bout of thermonuclear indigestion. Either way, supernovas produce a bevy of elements, from the lighter common ones to the rarer heavier ones. As I write this, I find myself glancing at my wedding ring. It’s made of tantalum, a blueish-silver metal. It may have been mined somewhere on Earth in the not-too-distant past, but originally, it was molded in the heart of an expiring star—either a smaller one that had ballooned into a red giant or a giant crucible that ignited into a supernova. That ring may be a symbol of affection in the extreme, but it’s also the shiny wreckage of a cosmic lighthouse. Water is also a byproduct of star death, but comparing it to something like tantalum might seem odd. After all, water is pretty much everywhere we look, from Earth’s oceans to the solar system’s myriad icy moons, all the way out to distant planets orbiting alien stars. In today’s universe, forming water is also quite easy: All one needs to do is stick two run-of-the-mill hydrogen atoms to one oxygen atom in a sufficiently cold patch of an already frigid universe. But it wasn’t always so effortless to keep the cosmos hydrated. Unless you formed a lot of water everywhere all at once, cosmic radiation and the high-temperature conditions around exploding stars would threaten to disintegrate all those water molecules long before any seas had a chance at forming. Along with his colleagues, Latif was curious: When, exactly, was water first able to emerge? Naturally, their thoughts turned to the very first furnaces in the universe. Around 400,000 years after the Big Bang, the first hydrogen and helium atoms popped up before being sucked into pockets of so-called dark matter. Once in those pockets, those atoms were squashed by gravity. Eventually, they were so thoroughly compressed that nuclear fusion got going—and boom, the very first stars lit up the universe. Astronomers have decided to give these primordial stars a counterintuitive name: Population III stars. Population II stars are the descendants of Population III stars, crafted from their detritus, while newcomers like our sun are known as Population I stars. They may have a bit of a silly name, but Population III stars are remarkably important. As Latif and his colleagues write in their recent study, these stars, and their supernovas, “were the first nucleosynthetic engines in the universe, and they forged the heavy elements required for the later formation of planets and life.” These stars were supermassive, and they burned brightly and swiftly; they existed for just a few million years—not billions of years, like many contemporary stars—before blowing themselves to smithereens. A notable caveat is that Population III stars are theoretical. Even the almighty James Webb Space Telescope, which can see farther out in space—and further back in time—than any other observatory, has yet to see any clear evidence (direct or indirect) of a Population III star. Perhaps one day it will. Perhaps it won’t.
But the astronomical community suspects that these primordial stars, or something very similar to them, did exist at cosmic dawn. So, while they try to hunt them down, astrophysicists use computers to simulate their births and deaths—and what the consequences of this life cycle may be. The recent study did just that, simulating two theoretical Population III stars: one 13 times as massive as the sun, and one 200 times as massive. The smaller star burned for just 12.2 million years, while the gigantic one persisted for just 2.6 million years, explains Daniel Whalen, a cosmologist at the University of Portsmouth in England and one of the new study’s authors. Both ended their lives spectacularly, via two slightly different types of supernova. A hail of blinding light was followed by a halo of debris rocketing out in all directions. At first, both halos were remarkably hot—too hot for the oxygen and the hydrogen to combine. “Gas needs to be cooled down first before water can form,” says Latif. Instead, all this matter spent several million years flying out into the darkness. But after a while—two million to three million years for the gigantic star’s supernova, and 30 million years for the smaller supernova—the debris halo became sufficiently chilled. The halo’s outward expansion experienced some turbulence, creating swirls that gathered mass and formed gravitational traps that drew in even more mass over time. The oxygen and hydrogen in those dense, cold traps were then able to bond—and water began to precipitate. If all the water from the smaller supernova were weighed, it would be equivalent to one-third of the Earth’s total mass. The gigantic supernova, which ejected far more hydrogen and oxygen, created a staggering 330 Earth-masses worth of water. (For scale, a back-of-the-envelope conversion of these figures appears at the end of this article.) These simulations—whose stills represent resplendent, van Gogh-like works-in-progress—are elegant. “The results are not surprising; in fact, they are to be expected. As soon as Pop III supernovae give you heavy elements, all sorts of molecules start to form in cool dense gas,” says Norman. Making multiple worlds’ worth of water would have been incredibly easy for these fast and furious stars. Plenty of uncertainty remains, though. The typical mass of a Population III star is not yet known, and that mass would affect their ability to manufacture water. And, lest we forget, nobody has yet scoped a Population III star. “Simulations that make predictions without having any observations to benchmark the models against are always difficult to fully trust. Slight tweaks to the implementation of the model could give you very different results,” says Renske Smit, an astrophysicist at England’s Liverpool John Moores University who was not involved with the new research. “That being said, we know that dust forms very rapidly from observations around 800 million years after the Big Bang, so it’s not difficult to believe water could form very early as well.” In other words: This result is big, if true. But if it is true, the consequences for the cosmos could be remarkable. These primordial stars didn’t just create a lot of water; they also released a lot of silicon, which binds with oxygen to form a very commonplace rock. In another study—currently a preprint awaiting peer review—by the same team, models show that, just over 200 million years after the Big Bang, in the ruins of the very first stars, planets were piecing themselves together around a second generation of stellar furnaces.
And those planets had access to plenty of fresh water—water that had several routes to reach them, from comet and asteroid impacts to icy dust being imprisoned within the planets as they were being built. Just think about that for a moment. Just a few heartbeats after the beginning of everything, of both space and time, there may have been water worlds gliding around, long before there were even enough stars to form galaxies. If life took root on those oceanic worlds, and it were able to gaze upward, it would have seen a night sky staggeringly different from our own diamantine vista. None of those primeval planets exists today. Eventually, their own stars would have died, immolating or jettisoning them in the process. Much of the water forged by those original supernovas would have been broken down and destroyed, split into its constituent atoms. And each subsequent generation of planets, and stars, would have their own water recycled from the seas of their ancestors. There is, however, a possibility that some of the very first water ever made, by those impossibly ancient Population III stars, is still around today. Some may be floating out in the middle of nowhere. Some may be swept up in the creation of far-flung planets. Not too long ago, I was outside, it was raining, and several droplets fell on my hand and trickled across my wedding ring. At that moment, a humbling thought popped into my mind. I bought that tantalum ring in 2024. That tantalum fell from space 4.6 billion years ago, along with much of Earth’s water. Those raindrops were fresh—but maybe, just maybe, a single drop contained one solitary molecule of water that was formed in the explosive final moments of a star that lived 13.6 billion years ago. Who knows? Perhaps the next time you’re out in the rain, the memory of a star from cosmic dawn will fall on you, too. Source of the article
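A back-of-the-envelope check on the water figures quoted above, added here for scale: the arithmetic is mine, not the study's, and the only inputs are the well-established mass of the Earth (about 5.97 × 10^24 kg) and the approximate mass of Earth's oceans (about 1.4 × 10^21 kg).

    # Rough scale check on the quoted water masses (illustrative only).
    EARTH_MASS_KG = 5.97e24   # mass of the Earth
    OCEAN_MASS_KG = 1.4e21    # approximate mass of Earth's oceans

    small_sn_water = EARTH_MASS_KG / 3     # "one-third of the Earth's total mass"
    large_sn_water = 330 * EARTH_MASS_KG   # "330 Earth-masses worth of water"

    for label, mass in [("smaller supernova", small_sn_water),
                        ("gigantic supernova", large_sn_water)]:
        print(f"{label}: {mass:.1e} kg of water, "
              f"roughly {mass / OCEAN_MASS_KG:,.0f} times the mass of Earth's oceans")

Even the smaller explosion comes out to well over a thousand times the mass of all of Earth's oceans, which puts the article's "huge volumes" in perspective.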

GOATReads: Philosophy

The last letter

Condemned to death by firing squad, French resistance fighters put pen to paper. Their dying words can teach us how to live

On a wintry day in Bordeaux, France, I took refuge from the rain inside a cosy bookshop stacked to the ceiling with books. Place Gambetta, Bordeaux’s iconic square framed with majestic 18th-century limestone façades, was under construction. ‘It’s always like this,’ the owner told me with a disparaging glare. I was not sure if the comment was directed at the rain or the construction. Inside, I browsed the shelves, soaking in the titles one by one. A book cast among thousands caught my eye: La vie à en mourir: lettres de fusillés (2003). It contained farewell letters of those shot by Nazi firing squads during the German occupation of France in the Second World War. I picked it up, opening the pages slowly and carefully as if I held in my hands a fragile treasure, like ‘this butterfly wing’ which the 19-year-old Robert Busillet, executed for his role in an intelligence-gathering and sabotage network, bequeathed to his mother ‘en souvenir de moi’, to remember him by. I flitted through the pages, reading flashes of a letter here, longer passages there. As someone who studies war, I am no stranger to the theme of killing and dying. But this experience was different. Last letters are unlike any other type of writing I have ever encountered. They are of a singular ilk because they peer into the souls of those confronting imminent and inescapable death. Different from everyday letters, diaries, memoirs, political tracts or philosophical treatises, because of the urgency that shapes the act of writing. The authors know there will not be another chance to say what must be said. Each last letter is uniquely personal, yet there is a universal feel to them, almost as if they paint a naked portrait of the human condition. To read them incarnates the phrase penned by Michel de Montaigne. ‘If I were a maker of books,’ he wrote in the 16th century, ‘I would make a register, with comments, of various deaths. He who would teach men to die would teach them to live.’ Dawn breaks on your final morn. A prison guard hands you a blank sheet of paper and a pen two hours before your execution by Nazi firing squad. The customs and traditions of the time – sometimes, but not always, respected by the Nazi authorities – permit the condemned a final act of communication: the last letter. To whom do you write? What do you say, knowing this is the last chance to say it? It’s not just the heroic resisters whom the Nazis executed. One could be killed for far less. In the autumn of 1941, the Militärbefehlshaber in Frankreich – the military commander who controlled Paris – enacted the ‘hostage code’, whereby all those in a state of incarceration are considered to be political hostages. In the event of a ‘terrorist attack’ – an act of armed resistance against the occupier – these political hostages could be executed in reprisal. In other words, those arrested and imprisoned for, let’s say, writing or distributing illegal tracts and newspapers, protesting in the streets, or even listening to news from forbidden radio sources such as the BBC were, effectively, handed death sentences-in-waiting. I’ve read hundreds of last letters, written by armed resisters and political hostages alike. One day, I sat down to catalogue the ways in which the soon-to-be executed communicated to their loved ones the macabre news. It was an uncomfortable, but deeply moving, task.
‘I can give no longer any further testimony of my affection than this letter,’ began Robert Beck, the head of an active terrorist organisation, according to the Gestapo. ‘Colvert will never again see his Plouf, nor his little Plumette. He is leaving for a big big journey,’ he added, softening the blow for his children. Jacques Baudry, who had resisted the Nazis since his high-school days when he organised protests and marches, later participating in armed attacks against the occupiers, was rather blunter in his letter to his mother: ‘They are going to rip me from this life that you gave me and that I clung to so.’ Huynh Khuong An, a young high-school teacher arrested for possessing anti-fascist propaganda and related clandestine activities, was plucked from the cistern of political hostages one sunny October day. Writing to his lover, he implores: ‘Be courageous, ma chérie. It is no doubt the last time that I write you. Today, I will have lived.’ This turn of phrase, so simple grammatically speaking, is deceptively philosophical because it captures the interval that separates the writer from the reader, the one who will have lived from the one who lives on. Death was no longer on the horizon. The moment was decided, imminent and irrevocable. To read the letters is to take a journey inward, deep into the world of emotions at the very frontier of living and dying. In one’s final moments, superficiality cuts away, revealing something meaningful and deep about the human condition. From Montaigne: In everything else there may be sham: the fine reasonings of philosophy may be a mere pose in us; or else our trials, by not testing us to the quick, give us a chance to keep our face always composed. But in the last scene, between death and ourselves, there is no more pretending; we must talk plain French, we must show what there is that is good and clean at the bottom of the pot. The last letters communicate what this something, at the bottom of the pot, is. One of the most powerful theories to explain how humans face up to their own mortality was hypothesised by the American psychiatrist Elisabeth Kübler-Ross in her groundbreaking book On Death and Dying (1969). When an individual learns of their impending death, they navigate among five stages of grieving, trying to come to terms with their own mortality: denial, anger, bargaining, depression and acceptance. Kübler-Ross observed terminally ill patients with a limited time horizon. For those killed by the Nazis, that interval was often condensed to the time allotted to write a letter. The last letters offer a raw portrait of grieving one’s own demise. Few of the condemned deny their fate. Some remain entrenched at the phase of depression. Others skip a phase, or oscillate between anger and acceptance, acceptance and depression. A surprising number traversed all the phases. And almost everyone bargains. Bargaining means asking the question: what would I do, if only I had more time? Montaigne would have us focus on the passages related to bargaining because these, by showing us what is at the bottom of the proverbial pot, teach us to live. If the last letters are any proof, the adage that your life passes before your eyes has some truth to it. It’s not the classic image of an entire lifetime; it’s more like watching old movie reels of favourite moments. 
‘I do not feel the need to sleep,’ explains Arthur Loucheux – a well-known anti-militarist and leader of a miners’ strike – to his brother at 2 am of his final night, ‘not out of fear, but to remember my life, because to sleep, bah! won’t I have time [to do so] very soon?’ Tony Bloncourt, or ‘petit Toto’, who was part of a youth battalion and partook in armed resistance, recounts to his parents: ‘My entire past comes to me in a flash of images.’ A life of 21 years. Was he thinking, as he wrote, of the years he would not live to see? As I read their words, I’m hit by a flash from my own past. It’s a story I often tell my students who are planning to study abroad because it depicts a quintessential encounter between me, a culture and its language. It’s a story about the little details that convey so much about local history, hiding in plain sight. There is a last-letter link, too, though at the time I did not know it. I was on my way to a lunch, navigating still-unfamiliar streets to my destination, at the crossroads of rue de Vouillé and rue Georges-Pitard in the quaint 15th arrondissement of Paris. The names meant nothing to me back then. I was oblivious to the stories that marked the public spaces I transited and inhabited. I had just arrived in France and was still learning the language. To help me practise my grammar skills, someone had the bright idea to impose a very peculiar rule: every spoken sentence had to employ the subjunctive in some way or another. Any basic French grammar book will tell you the subjunctive is used to indicate some sort of subjectivity, or uncertainty, in the mind of the speaker. Feelings of doubt and desire, as well as expressions of necessity, possibility and judgment. The subjunctive inhabits many last letters. Georges Pitard’s letter to his wife, Lienne, begins with the subjunctive, used as an expression of necessity: ‘It is necessary for you to be extremely courageous, because this time misfortune is upon us; it flashed like lightning and it strikes us.’ Pitard, I would eventually learn, was a lawyer who defended those unjustly imprisoned at the beginning of the occupation and was arrested for it. A man of principle: ‘I only did good, thought of easing misery,’ he wrote in his last letter to his wife before being executed as a political hostage. ‘But for some time now the elements are raging and everything conspires against men like me.’ Knowing these details adds a layer of meaning to my memory and its resonance, with the last scene playing out again and again each time I tell the story. Pitard’s final words always the same. We can imagine a 40-something Pitard in his cell writing these words as time inexorably ticks and tocks. He seems to regret that ‘we quarrelled a few times, hurt each other for trifles’. As the execution looms ever closer, he bargains with time. Remembering the past, perhaps in shock that tomorrow will not be just like yesterday, he writes: ‘This evening, I think of your sweetness, your kindness, of our sweet moments, those from long ago and those of yesterday, know well, my darling, one could not love you more than I did.’ He seeks one final escape from the fate that awaits him, in a place where everything is pure love, where nothing else exists except dreams of her: ‘And I will fall asleep with your sweet image in my eyes and the taste of our last kisses that are not that distant, my sweet friend, my gentle little Lienne. Be sensible … Be reasonable. Love me, for a long time yet.’ The subjunctive again. 
Expressions of desire and longing. When time seems like an infinite plain before us, we take the days ahead for granted. There will always be time to do the things that matter most. Too often, maybe, these are a small part of a bigger canvas often dominated by other priorities. Time duly runs its course, and the letter comes to an end, but not before ‘Geo’ adds a postscript: ‘I kiss passionately your photograph and press it to my heart, the first [photo] of our youth, and the one from Luchon in which you are wearing flowers.’ I imagine him in the dark of night, pressing his lips to the photo. Reliving the memories. When Lienne reads his letter, Georges will have lived. Despite the raw emotion of the last letters, it’s hard to imagine that the elements, raging, will conspire against me. Psychologically, as humans, we flee from the idea of the world carrying on without us. We push the fact of dying deep into our subconscious. Instead, we take comfort in the naive belief that tomorrow will be like yesterday, and so on, and so forth. Such is the power of denial. I remember the exact moment when the façade of denial began to crumble. To plunge deeper into the ambiance of the dark years of the Nazi occupation, I searched out other writings from the time. I found a copy of La patrie se fait tous les jours, an anthology of texts from the French intellectual resistance. It was a first edition. The pages were crisp, still uncut, as if the book had just come off the printing press. Except it had been published in 1947, less than three years after France was liberated. To leaf through the pages required, first, slicing them apart. The same movement one makes to open a letter, it turns out. It was a slow and meticulous process. I dutifully opened them, lingering to read a poem by the resistance poet Paul Éluard – ‘Liberté’ (1942) – until I arrived at page 111. There, as I carefully opened the next few pages to reveal the last letter of Daniel Decourdemanche (known by the pseudonym of Jacques Decour) – a French professor of German literature in his 30s, living in Paris – something happened. Psychologically, it was like the floor fell out from under me, plummeting me into the tumult of the times. Decourdemanche was part of the intellectual resistance. His crime, which led to his May date with a Nazi firing squad, was to organise and distribute underground magazines, whose purpose was to rally intellectuals to the anti-fascist cause, and to inject some humanism into news cycles gorged with nationalist and divisive propaganda. In his last letter, tempted to imagine what might have been had he had more time, Decourdemanche writes to his parents: ‘I dreamt a great deal, this last while, about the wonderful meals we would have when I was freed.’ But he accepts these experiences will not include him: ‘You will have them without me, with family, but not in sadness.’ Instead of regret, his mind drifts to the meaningful experiences he did live: ‘I relived … all my travels, all my experiences, all my meals.’ And at the end: ‘It is 8 am, it will be time to leave. I ate, smoked, drank some coffee. I do not see any more business to settle.’ I sat there, moved but immobile, staring at these last lines, then at his signature. ‘Votre Daniel’, your Daniel. I had the strange impression of looking in a mirror, of staring death in the face. Another Daniel, also a humanist in a world of inhumanity and ruthless self-serving politics. 
Reading his words, I drift across the thin frontier separating the past from a parallel world. In reading how he and others confronted death, in bearing witness to their fears, hopes, joys and regrets, I am instinctively transported to an analogous moment. To whom would I write? What would I say? Am I ready to die? What would I bargain for? That’s what the last letters do, they open this frontier and beckon us to cross. Montaigne counsels his readers to come to terms with death by learning to no longer fear it. This has a liberating effect, according to the old sage, because it allows us to be more in tune with ourself while we are among the living. The trick is to cultivate what is at the bottom of the pot long before the final act. Reading the last letters allows us to play such a trick on time. For we, the readers, are still in the world of the living. We are not yet part of those who, when the ink dries on the page and it is read by loved ones later, will have lived. Maybe we do not know what, when the time comes, we might bargain for. But the last letters tell us what those on the other side of life wanted, what they bargained for, at death’s door. The verdict had fallen. Forty-one-year-old André Cholet, condemned to death for running the radio counter-espionage wing of a major resistance group, had just seen his wife for the last time. He recounts the scene in his last letter: I still have the time to talk to you ma petite, as if you were still here close to me, on the other side of the wire mesh. For this last day you were beautiful like you had never been before and oh what grief is now yours. I would like to be in this moment still. Bargaining to be there in that instant. To see her eyes, her smile. To smile back. To soak up all the non-verbal gestures that define a person, a loved one, her. To blow a kiss. How seldom do we remark these moments in normal times? They seem unremarkable when lived day to day, but in the last scene, between death and oneself, the emotions, hopes and regrets that comprise the human condition are heightened a thousandfold. What if we were attuned in such a way that daily encounters with loved ones were heightened a thousandfold? Or even just tenfold? Bargaining is the bedfellow of regret. Twenty-one-year-old Roger Pironneau was not sorry for the espionage that led to his arrest. He does not regret resisting. But, writing to his parents, he is sorry ‘for the suffering I caused you, the suffering I am causing you, and that which I will cause you. Sorry to everyone for the evil that I did …’ And he is sorry ‘for all the good that I did not do’. I imagine his mind wandering – let’s be clear, even though there is no chance, no illusion, of actually having more time, it wanders toward a question we readers can still pose: if only I had had more time, what good could I have done? Last letters are finite. They contain the words that fit the page allotted, and no more. What is not written remains unsaid. Arrested for acts of sabotage and other clandestine activities, Maurice Lasserre composes his last letter to his wife, Margot. He signs his name one last time, with the unique characteristic furls that make his signature his. There is just enough space for a final PS: ‘I close the envelope by cherishing you and kissing you for the last time, again good kisses. I send you my wedding ring and a lock of hair that you will keep in memory of me …’ As he folds the letter to place it in an envelope, something unexpected happens. 
‘They are giving me more paper,’ he notes below his signature, before continuing on a fresh page. ‘I take advantage to write to you again and to kiss you still once more …’ One more gesture of love. ‘And the little ones, and the older ones, too.’ Lasserre writes on. A message for each of his children. And one more thought destined for Margot: ‘Still more kisses and think that I am yours, even in face of the death that is coming.’ Another sheet of paper is like a new day, though if we thought it might be the last, perhaps our perception of the most ordinary of gestures would change. Bargaining exposes the raw core of what gives meaning to the everyday gestures. When we are young, we think there will be an infinite number of blank pages upon which to write our story. Twenty-something Claude Lalet found himself, the morning of his last day in the world of the living, writing to his new bride. Sure, he was active in various protests, which led to his arrest. But it was never supposed to end like this, being executed as a political hostage in reprisal for the assassination of a German officer by the armed resistance. In the back of a truck on the way to the quarry where he is to be executed, he composes himself: ‘Already the last letter, and already I have to leave you!’ The repetition of the word ‘already’ betrays his anger; it’s simply not fair, his fate. But Lalet does not want to dwell in anger. Focusing on the beauty around him, he observes in poignant prose: ‘Oh the road is beautiful, ah, truly!’ As the truck rumbles forward and reality sinks in deeper, he battles to keep his bitterness at bay. What was it that made life so wonderful? ‘I know I must clench my teeth. Life was so beautiful; but let us hold on to, yes hold on to our laughs and our songs …’ Lalet has every reason to be bitter, but the final lines of his last letter suggest that, deep down, he realises that anger, however valid, is empty sustenance: ‘Courage, joy; immense joy … I love you always, constantly. I kiss you, I hug you with all my strength. Long live life! Long live joy and love.’ All those whose letters are cited above died at the hands of the authoritarian state. They came from all walks of life and diverse political backgrounds. Some took up arms to fight back, while others resisted non-violently, or were simply caught up in the repressive nets of the state. I reread their last letters in parallel to the newsfeeds that, every day, bring ubiquitous headlines stirring nationalistic and xenophobic sentiments. Even if I cannot quite wrap my head around the absurdity of being in a position of writing my last letter, there is foreboding in the air. Instinctively, I look for parallels in the past, drifting back across that frontier the last letters have opened to me. Daniel Decourdemanche wrote in his diary in 1938 on the eve of the infamous Munich Agreement: One prepares oneself, one ponders about what is to come, about what must kill us without our being able to have a gesture of defence, but it will maybe take a long time, like all incurable maladies. Waiting so long for the inevitable, this is the test. The diary entry is a prescient bookend to his last letter penned in 1942, before he was executed in the glade at the sinister Mont-Valérien fortress on the outskirts of Paris. As he watched the forces of history unfold, Decourdemanche was no doubt thinking of the possibility of his own death – a life cut short by the tumult of the times. 
‘How to find your way around?’ he asks, in a world in which humanism is a bad word, where vitriol is the coin of the realm. Where the dykes of civility and tolerance that once kept fanaticism at bay have burst. Where there is power in hating the other, in calling the other names, in blaming the other for all our problems. As if doing so acts as a shield against whatever may come. ‘The strong who face this test,’ he proffers, ‘are not those we expect.’ Falling in step, toeing the line of intolerance, embracing the newly emboldened toxic masculinity? No. ‘The strong,’ Decourdemanche surmises, ‘are those who loved love more than everything else.’ ‘It is the right time for us to remember love,’ he tells himself. ‘Have we loved enough?’ he asks. ‘Have we spent several hours a day marvelling at others, being happy together, feeling the price of contact, the weight and value of hands, eyes, the body? Do we still know how to devote ourselves to tenderness?’ These are formidable questions. Once you realise that your days are numbered, that other emotions are competing for time and space in your life, answering them offers a chance to reorient yourself amid all the noise and contempt: ‘It is time, before disappearing in the trembling of an Earth without hope, to be entirely and definitely love, tenderness, friendship, because there is nothing else. One must swear to only care about loving, to love, to open your soul and hands, to look with the best of your eyes, to hold what you love close to you, to march without anguish, radiating tenderness.’ Back in the 21st century, this Daniel wonders how many people around him are having the same existential thoughts. Would it make a difference if everyone confronted their own mortality in earnest? Thinking of the bottom of the proverbial Montaignian pot amid the constant brouhaha, the rhetoric, the posturing and pretence of a world clutching at madness, I ask myself the question that those who can still bargain for time should ask: how might I live my life differently? Source of the article

Human Computers: The Early Women of NASA

These ground-breaking female mathematicians, engineers and scientists produced calculations crucial to the success of NASA's early space missions.

Barbara “Barby” Canright joined California’s Jet Propulsion Laboratory in 1939. As the first female “human computer,” she calculated anything from how many rockets were needed to make a plane airborne to what kind of rocket propellants were needed to propel a spacecraft. These calculations were done by hand, with pencil and graph paper, often taking more than a week to complete and filling up six to eight notebooks with data and formulas. After the attack on Pearl Harbor, her work, along with that of her mostly male teammates, took on a new meaning—the army needed to lift a 14,000-pound bomber into the air. She was responsible for determining the thrust-to-weight ratio and comparing the performance of engines under various conditions. Given the amount of work, more “computers” were hired, including three women: Melba Nea, Virginia Prettyman and Macie Roberts. Macie Roberts was about 20 years older than the other computers working at JPL. Coming to engineering later in life, she was meticulous and driven, rising through the ranks and becoming a supervisor in 1942. When tasked with building out her team, she made the decision to hire only women, believing men would undermine the cohesion of the group and not take direction well from a woman. Roberts set a precedent for future female supervisors who made it their job to hire women, often taking a chance on young women right out of college. Helen Ling was one such supervisor who followed in Roberts’ footsteps. Ling actively hired women who didn’t have an engineering education, encouraging them to attend night school. At a time when maternity leave did not exist, pregnancy could be detrimental to a woman’s career. Way ahead of her time, Ling offered her employees her own version of unpaid maternity leave, rehiring them after they had left to give birth. Barbara Paulson began working at JPL in 1948 when calculating a rocket path took all day. On January 31, 1958, she played a role in the historic launch of the JPL-built Explorer 1, the first satellite successfully launched by the United States. She was tasked with plotting the data received from the satellite and a network tracking station. It was Paulson and her fellow human computers who hand-charted America’s entrance into the Space Race. Paulson left JPL to have her first daughter, and thanks to Ling’s unofficial unpaid maternity leave, returned in 1961. In the 1950s, NASA was starting to work with what we now know as computers—but most male engineers and scientists did not trust these machines, believing them to be unreliable in comparison to human calculations. Dismissing computer programming as “women’s work,” the men gave the new IBMs to the women of JPL, providing them with a unique opportunity to work with, and learn to code, computers. It comes as no surprise then that the first computer programmers in the JPL lab were women. They became attached to a specific IBM 1620, nicknaming her CORA and providing her with her own office. After graduating in 1953 with a degree in chemical engineering from the University of California, Los Angeles, Janez Lawson had the grades, degree and intelligence to get any job she wanted. The problem? Her race and gender.
She responded to a JPL job ad for “Computers Wanted” that specified “no degree necessary,” which she recognized as code for “women can apply.” While it would not be an engineering position, it would put her in a lab. Macie Roberts and Helen Ling were already working at JPL, actively recruiting young women to compute data, and Lawson fit the bill. Lawson was the first African American to work in a technical position in the JPL lab. Taking advantage of the IBM computers at her disposal, and her supervisor’s encouragement to continue her education, Lawson was one of two people sent to a special IBM training school to learn how to operate and program the computers. A remarkable group of African American women, working at what would become NASA’s Langley Research Center in Virginia, were breaking down their own gender and racial barriers. Dorothy Vaughan joined the team in 1943. Already having to ride in the colored section of a segregated bus, she was put to work in the “colored” computers section. In 1951, Vaughan became the first African American manager at Langley and started, like her cohorts on the West Coast, to hire women. That same year, Mary Jackson joined her team, working on the supersonic pressure tunnel project that tested data from wind tunnel and flight experiments. Katherine Johnson—who was awarded the Presidential Medal of Freedom in 2015 by President Barack Obama—joined the team at Langley in 1953. A physicist, space scientist and mathematician, Johnson provided the calculations for Alan Shepard’s historic first flight into space, John Glenn’s ground-breaking orbit of the Earth and the trajectory for Apollo 11’s moon landing. One of the earliest human computers still works at JPL. Now 80 and NASA’s longest-serving female employee, Sue Finley was originally hired in 1958 to work on trajectory computations for rocket launches and is now a software tester and subsystem engineer. She is currently working on NASA’s mission to Jupiter. Her legacy, and that of the other early human computers, is literally written in the stars. It was these women whose careful and precise hand-made calculations sent Voyager to explore the solar system, who wrote the C and C++ programs that launched the first Mars rover, and who helped the U.S. put a man on the moon. Though rarely seen in the famous photos of NASA’s mission control, these early human computers contributed immeasurably to the success of the United States space program. Source of the article