

How Old-Time Fiddle Music Took Root in Indigenous Alaska

In Fairbanks, fiddling thrives—bridging cultures, sustaining traditions and filling the dance floor with life.

The moment you step into the dance hall, the sound envelops you. A fiddle erupts; an electric guitar wails; a keyboard drives out steady chords. Under colorful paper decorations dangling from the ceiling, couples two-step across a worn dance floor, while elders nod to the beat from folding chairs. The unwavering pull of the music unites everyone in the room.

Close your eyes and you might think you were somewhere in Appalachia, where fiddling is a bedrock tradition. But this is interior Alaska, thousands of miles away. For more than four decades, the Athabascan Fiddle Festival has filled community halls in Fairbanks with a sound that is both global and distinctly Native, a blend of Irish, Scottish and French reels layered with the cadence of the boreal forest and the Yukon River.

“In the early days, the trappers and miners are the ones that came down the Yukon River, taught the Native people how to play those stringed instruments,” says Ann Fears, general manager of the Athabascan Fiddlers Association, which created the three-day festival attended by nearly 40 bands. “[The Native people] would have their own dances. They just kept playing and getting better. [Growing up], I would watch the people dancing and singing. It was a highly spiritual event for me.”

This week, fiddlers, guitarists and singers from villages across Alaska have converged on the Chief David Salmon Tribal Hall in Fairbanks. The festival is equal parts reunion and concert, celebration and, more and more urgently, preservation. Many elders who carried this tradition have passed on, and rising travel costs make it harder for musicians in remote Alaskan villages to reach Fairbanks. And still, each year, the music rises again.

Origins along the river

Athabascan fiddling traces back nearly two centuries. In the 1840s, Hudson’s Bay Company fur traders made their way down the Yukon River, carrying fiddles and sheet music from Scotland, Ireland and France. Jigs, reels and polkas found new life in Athabascan villages along the water. Athabascan culture stretches across interior Alaska, a landmass larger than California, encompassing dozens of rural Native villages. The fiddle serves as a community connection. Many Athabascan fiddlers learned by ear, reshaping melodies into their own rhythms and structures.

The character of the music evolved with the geography of the river. The upriver fiddling style, rooted in Gwich’in communities, remained lean and rhythmic, often featuring solo or twin fiddles, accompanied by guitar. It drove square dances, jigs and reels. The downriver style, associated with Tanana and Koyukon Athabascans, absorbed outside influences during the Klondike and Nome gold rushes. This style embraced larger ensembles, adding piano and vocals, favoring slower tempos and wide repertoires well suited to community halls.

For generations, these styles stayed separate, divided by distance. With the founding of the Athabascan Fiddle Festival in 1983, upriver and downriver styles began to co-mingle, creating a shared space where musical traditions blended and took on new life on a single stage.

A living tradition

The annual festival unfolds over three days each November. The music runs nearly nonstop, from noon to midnight and into the early hours of the morning. School groups arrive in the afternoons. “They learn how to two-step, and they learn some of the dances from the elders,” says Fears. “There’s a lot of growing and teaching and having fun throughout the day.”

The event is family-centered and alcohol-free, a point of pride for organizers. “Alcohol kind of ruined families—it tore them apart,” Fears explains. “Families can bring their children, and they can be safe at this community event.”

For those unable to make the trip to Fairbanks, the Athabascan Fiddlers Association broadcasts the entire festival on KRFF 89.1 Voice of Denali radio, carrying the music back into the villages that gave it life.

Passing the torch

Keeping this tradition alive requires more than one festival a year. It requires teaching. Across interior Alaska, programs connect youth and elders through music. The Dancing With the Spirit project, founded by Chief Trimble Gilbert, travels into remote tribal communities, making fiddling more accessible. A master Gwich’in fiddler and spiritual leader, Gilbert received a National Heritage Fellowship in 2024 from the National Endowment for the Arts. His vision is simple: put fiddles and guitars into children’s hands, with elders by their side, so children learn not only the music but the culture.

Their teaching method consists of a color-coded system of dots on the strings, explains Belle Mickelson, the program’s director. “In the very first hour, the kids can play the four main chords,” she says. Over the past two years, Dancing With the Spirit has conducted more than 50 weeklong music camps in remote villages. For students who show promise, the program may leave behind instruments. During the Covid-19 pandemic, the organization created video tutorials and distributed flash drives loaded with videos and lessons for villages without reliable internet.

“There are elders in the classroom, and I really feel like the culture that we bring is even more important than the music,” says Mickelson, who began playing the fiddle herself at age 10. “You know, connecting them with these elders.”

In Fairbanks, Young Native Fiddlers provides another anchor. Meeting on Saturdays during the school year, the group teaches children and teens both fiddle and guitar. For some, it’s their first structured lesson. For others, it’s a way back to the traditions of their grandparents.

Musical masters

The festival’s stage showcases musicians who keep the tradition alive while pushing it forward. One of the best-known is Angela Oudean, a bluegrass fiddler who grew up in Anchorage. By age 16, she co-founded Bearfoot, a successful Americana band. Today, she is among the festival’s most in-demand performers, hired year after year to back up bands in need of a fiddler.

“I get to hang in there and play solos and do my best to back them up,” she says. “It’s exciting. It can be kind of an adrenaline rush. You don’t know what’s going to happen. You just really have to be on your toes the whole time.”

She thrives on the improvisation that the festival demands. “Everybody is just coming from all different places and meeting up and just sharing music,” Oudean says. “I’m really lucky: I get to play with lots of bands, all from different places, with different styles.” Beyond the stage, she carries that passion into the classroom. She has taught across Alaska, traveling to more than a dozen villages where fiddling might otherwise fade.

Another key player is Marc Brown, a Koyukon Athabascan guitarist, singer and bandleader of Marc Brown & the Blues Crew. Born in the small interior Alaska town of Huslia, Brown has played the festival since its second year. Known for his guitar-fiddle duels with Oudean, he bridges genres, moving from blues and Americana to gospel and country.

“My great-grandfather, Sammy Sam, was a really talented fiddler, one of the best fiddlers I’ve ever heard,” says Brown. “A lot of his sons played, including my grandfather. [But], even as a 4-year-old, I remember wanting to play guitar.” Today, Brown sees himself as an accidental bridge. “I can play the old style, but I can [also] play newer rock to back up the younger guys [on fiddle],” he says.

Deeper musical meaning

Fiddle music in Alaska is functional music that must be in motion. Jigs, waltzes, two-steps and square dances fill the hall nightly, with four generations often dancing side by side. Favorite festival dances like the Rabbit Dance and the Duck Dance are woven into every gathering, especially cherished by young people. The dances not only pass culture to the next generation but also offer healing and joy for adults. “When I’m dancing, I’m in my own world,” says Fears. “I look around and see people smiling.”

Some songs have become festival mainstays, their stories as enduring as their melodies. “Eagle Island Blues,” composed in the 1940s by Tom Patsy while plodding in snowshoes to Nulato, was born of sorrow, as he realized he would miss a Christmas dance with his beloved. Another festival favorite, “Indian Rock ’n’ Roll,” blends traditional fiddling with a rhythmic backbeat. Its popularity reflects the musical genre’s openness to new sounds while holding fast to its roots.

Looking forward

As the clock ticks past midnight, the hall is still alive. Toddlers doze in their parents’ arms, but the music and dancing continue. Onstage, a fiddler launches into a reel, the guitarist follows, and the keyboard strikes a fresh chord. The crowd surges once more.

Oudean is hopeful. “I think the younger fiddlers are continuing the tradition, learning the old songs,” she says. “Fiddling is one of those things that is pretty traditional, [so] you learn the old songs.”

That said, old-time fiddling is still evolving. “It’s changing because the younger ones are choosing different instruments,” says Fears. “These younger ones are more music-savvy. They mix their own music now, and they learn how to play the different stringed instruments.”

For those who attend, one thing is for sure: Old-time fiddle music is no relic. It is a living, breathing force, carrying history, resilience and joy into the future, just as it has since the first fiddles floated down the Yukon River nearly 200 years ago. Source of the article

Just a pale blue dot

When we see the Earth as ‘a mote of dust suspended in a sunbeam’, what do we learn about human significance?

On St Valentine’s Day 1990, NASA’s engineers directed the space-probe Voyager 1 – at the time, 6 billion kilometres (3.7 billion miles) from home – to take a photograph of Earth. Pale Blue Dot (as the image is known) represents our planet as a barely perceptible dot serendipitously highlighted by a ray of sunlight transecting the inky-black of space – a ‘mote of dust suspended in a sunbeam’, as Carl Sagan famously put it. But to find that mote of dust, you need to know where to look. Spotting its location is so difficult that many reproductions of the image provide viewers with a helpful arrow or hint (eg, ‘Earth is the blueish-white speck almost halfway up the rightmost band of light’). Even with the arrow and the hints, I had trouble locating Earth when I first saw Pale Blue Dot – it was obscured by the smallest of smudges on my laptop screen.

The striking thing, of course, is that Pale Blue Dot is, astronomically speaking, a close-up. Were a comparable image to be taken from any one of the other planetary systems in the Milky Way, itself one of between 200 billion and 2 trillion galaxies in the cosmos, then we wouldn’t have appeared even as a mote of dust – we wouldn’t have been captured by the image at all.

Pale Blue Dot inspires a range of feelings – wonderment, vulnerability, anxiety. But perhaps the dominant response it elicits is that of cosmic insignificance. The image seems to capture in concrete form the fact that we don’t really matter. Look at Pale Blue Dot for 30 seconds and consider the crowning achievements of humanity – the Taj Mahal, the navigational exploits of the early Polynesians, the paintings of Georgia O’Keeffe, the inventions of Leonardo da Vinci, John Coltrane’s A Love Supreme, Cantor’s theorem, the discovery of DNA, and on and on and on. Nothing we do – nothing we could ever do – seems to matter. Pale Blue Dot is to human endeavour what the Death Star’s laser was to Alderaan. What we seem to learn when we look in the cosmic mirror is that we are, ultimately, of no more significance than a mote of dust.

Contrast the feelings elicited by Pale Blue Dot with those elicited by Earthrise, the first image of Earth taken from space. Shot by the astronaut William Anders during the Apollo 8 mission in 1968, Earthrise depicts the planet as a swirl of blue, white and brown, a fertile haven in contrast to the barren moonscape that dominates the foreground of the image. Inspiring awe, reverence and concern for the planet’s health, it was described by the photographer Galen Rowell as perhaps the ‘most influential environmental photograph ever taken’. Pale Blue Dot is a much more ambivalent image. It speaks not to Earth’s fecundity and life-supporting powers, but to its – and, by extension, our – insignificance in the vastness of space.

But what, exactly, should we make of Pale Blue Dot? Does it really teach us something profound about ourselves and our place in the cosmic order? Or are the feelings of insignificance that it engenders a kind of cognitive illusion – no more trustworthy than the brief shiver of fear you might feel on spotting a plastic snake? To answer that question, we need to ask why Pale Blue Dot generates feelings of cosmic insignificance.

One account of the feelings elicited by Pale Blue Dot begins in the 17th century, with the French scientist and philosopher Blaise Pascal. Pascal was born in 1623, a mere 14 years after Galileo directed the first telescope heavenwards. Galileo’s observations not only confirmed Copernicus’s heliocentric conception of the solar system and revealed ‘imperfections’ in the celestial bodies (such as the Moon’s craters and mountains), but also revealed countless stars invisible to the naked eye. It was a moment of profound upheaval for humanity’s self-understanding, and many of the reflections recorded in Pascal’s Pensées – a series of notebook jottings published only after Pascal’s death – seem to have been prompted by the new astronomy:

When I consider the short span of my life absorbed into the preceding and subsequent eternity … the small space which I fill and even can see, swallowed up in the infinite immensity of spaces of which I know nothing and which know nothing of me, I am terrified, and surprised to find myself here rather than there, for there is no reason why it should be here rather than there, why now rather than then. Who put me here? On whose orders and on whose decision have this place and time been allotted to me? (from the Honor Levi translation of Pascal’s Pensées, 1995)

But it is this line from the Pensées – ‘The eternal silence of these infinite spaces terrifies me’ – that perhaps best captures the feeling of cosmic insignificance. Indeed, the line could well serve as a caption for the Pale Blue Dot. For Pascal, the night sky wasn’t merely awe-inspiring – it was terrifying. And it was terrifying not (just) because it was infinite, but because it was ‘silent’. Pascal doesn’t tell us what he meant by the silence of space, but there is reason to suspect that at least part of the answer is theological. The cosy, well-ordered universe of the Middle Ages had been replaced by a universe that was not only vastly bigger but seemed to be ruled by chance and contingency. ‘Who put me here?’ Pascal asks. ‘Perhaps no-one,’ one can almost hear him answer. The silence of space is the silence of the Universe in response to the question of God.

Of course, Pascal himself was no atheist, and there are passages in the Pensées that suggest a very different attitude to the vastness of space:

So let us contemplate the whole of nature in its full and mighty majesty, let us disregard the humble objects around us, let us look at this scintillating light, placed like an eternal lamp to illuminate the universe. Let the earth appear a pinpoint to us beside the vast arc this star describes, and let us be dumbfounded that this vast arc is itself only a delicate pinpoint in comparison with the arc encompassed by the stars tracing circles in the firmament.

Pascal goes on to suggest that the very fact that our imagination ‘loses itself’ in the face of such thoughts is itself ‘the greatest perceivable sign of God’s overwhelming power’. But it is Pascal’s terror of space that has reverberated down the ages. One can hear its echo in Joseph Conrad’s novel Chance (1913), in which the narrator describes ‘one of those dewy, clear, starry nights, oppressing our spirit, crushing our pride, by the brilliant evidence of the awful loneliness, of the hopeless obscure insignificance of our globe lost in the splendid revelation of a glittering, soulless universe.’

So here is one account of why Pale Blue Dot elicits the feelings it does. It indicates (reminds us?) that we are on our own. The Universe is not the product of divine plan; or at least, if it is, it is not a plan that takes our interests seriously.

Let’s suppose – if only for the sake of argument – that this account goes at least some way towards explaining why Pale Blue Dot elicits the feelings that it does. What, then, should we make of those feelings? That, of course, turns on the question of how God’s inexistence would bear on human significance. Some assume that God is required for cosmic significance. Nothing could really matter in a world without God, and if nothing really matters then we don’t matter. If that’s right, then the feelings elicited by Pale Blue Dot wouldn’t be illusory. Instead, they would reveal a profound – and perhaps deeply unsettling – truth: from a cosmic point of view, we really are insignificant.

But the idea that significance requires God is deeply puzzling. If the beauty, knowledge and creativity that we see around us don’t really matter in and of themselves, how could the addition of God help? Indeed, it’s surely more plausible to suppose that it’s God’s presence rather than God’s absence that poses the more serious threat to human significance. After all, the beauty, knowledge and creativity that we’ve produced surely pale in comparison with that traditionally ascribed to God. As a 21st-century psalmist might put it, what is the sum total of human knowledge when set against God’s wisdom? What is the beauty of the Taj Mahal, A Love Supreme or the paintings of O’Keeffe when placed against the grandeur of the Horsehead Nebula in the Orion constellation?

Theology provides one lens through which to view the sense of cosmic insignificance; accounts of our experience of space provide another. Not the space of astronomy and inter-planetary probes, but the space of the ordinary terrestrial environment as we perceive it. It’s a familiar thought that ordinary modes of experiencing space (and, indeed, time) are structured in terms of the human body and its capacities. One sees a door as 10 steps away and the fence as 20 steps away. As we grow and our limbs lengthen, the sense of the space around us also changes. This explains the common experience of visiting one’s childhood home and neighbourhood, and finding them much smaller than expected. Distances that had required 10 steps to traverse as a child can now be crossed in only five; doorframes that once towered high above one’s head can now be reached with ease.

The bodily structure of perceptual experience is reflected in our units of measurement. Ancient Mesopotamian builders used the cubit, determined by the length of the forearm from the elbow to the tip of the middle finger. More familiar to us is the foot, a central unit of measurement in Ancient Rome, Greece and China, and based of course on the length of the human foot. Sometimes, of course, we trade the dimensions of the human body for those of other animals. In the world of children’s books, skyscrapers are measured in terms of giraffes (‘the Burj Khalifa is 166 giraffes tall’) and the weight of construction vehicles is given in terms of elephants (‘the Bagger 293 weighs 2,580 elephants’). Giraffes and elephants may not make for good units of scientific measurement, but they do provide young children with a sense of an object’s properties.

Our perceptual faculties enable us to grasp the scale of most built environments, but they are ill-equipped to capture the enormity of nature. To fully appreciate the size of the Grand Canyon, you need to hike down into it – simply looking won’t do the trick. The towering peaks of the Karakoram, the endless sand dunes of the Empty Quarter, the vast glaciers of Antarctica speak to nature’s ability to overwhelm our perceptual capacities. In James Joyce’s novel Ulysses (1922), Buck Mulligan, gazing out over Dublin Bay, refers to ‘the scrotumtightening sea’.

But to fully appreciate the limitations of the human body as a scale for nature, you need to cross the Kármán line, the boundary between Earth’s atmosphere and outer space. At its closest, Venus is 38 million km away. (That’s around 7.6 billion giraffe-lengths.) Neptune, at its furthest, is 4.7 billion km (around 940 billion giraffes) from us. It is 40,208,000,000,000 km from the Sun to Proxima Centauri, the next-nearest sun to us. Taken with an exposure time of over 11 days, the Hubble Ultra Deep Field captured a region of the night sky smaller than a grain of sand held at arm’s length – and yet it depicts around 10,000 galaxies. These are unimaginably big numbers. We might be able to recite them, but few of us – other than mathematicians and astronomers, perhaps – can truly grasp them. (And note how natural it is to describe understanding in terms of bodily activity – ‘grasping’.)

One can, of course, avoid the need for big numbers by trading familiar units for unfamiliar ones – as science does. Instead of measuring the distance to Proxima Centauri in terms of kilometres, astronomers use astronomical units (the average distance between Earth and the Sun – around 150 million km) or light-years (multiples of the distance light travels in a year – around 9.5 trillion km). That gives us a more manageable way of dealing with astronomical distances, but it doesn’t help us to truly grasp the immensity of the cosmos. One wants to ask how far a light year or astronomical unit is in real money.

But if this is what explains our feelings of cosmic insignificance – and it’s likely to be part of the story – then it’s not clear why we should pay those feelings any heed. So what if our perceptual experiences cannot accommodate the dimensions of the cosmos? Our perceptual systems have been designed by evolution to help us navigate ordinary terrestrial environments; their job isn’t to track what matters. We might be unnerved by the amount of real estate we occupy, but such feelings provide no insight into our cosmic significance.

The two accounts of Pale Blue Dot that we’ve examined are curiously silent about one crucial feature of the image: it is not just an image of the vast emptiness of space, it’s an image of the vast emptiness of space in which we appear. It’s not an image from Earth but an image of Earth. Equally crucially, it’s an image in which Earth – the very object that provides the context for everything that matters to us – is barely perceptible, no more salient than a mote of dust.

Salience matters because it’s a signal from our senses: ‘This is significant – pay attention to it.’ From birth, our senses alert us to the presence of features that are important to us – human faces, our mother’s voice, the speech of those around us. The mechanisms that track perceptual salience evolve as we ourselves develop, but they continue to function as sentries, alerting us to what matters. The smell of smoke, loud sirens, sudden movements – these phenomena are all attention-grabbing. It doesn’t matter how engaging your current conversation is: if someone on the other side of the room utters your name, you’ll have a hard time not tuning in and eavesdropping on their conversation.

The corollary, of course, is that what isn’t attention-grabbing doesn’t strike us as significant. And Earth as it appears in Pale Blue Dot is pretty much as unobtrusive, as non-attention-grabbing, as overlook-able as it is possible to be. (Indeed, even the hint of perceptual salience – the sunbeam in which Earth is suspended – isn’t a genuine feature of Earth’s position in the cosmos but an artefact of the image itself.) Pale Blue Dot seems to capture the fact that, from a truly objective point of view – the view from ‘nowhere’, as we might put it – we aren’t attention-grabbing. And if we aren’t attention-grabbing (it’s natural to assume), then we’re not genuinely significant.

But if this account explains why Pale Blue Dot elicits feelings of cosmic insignificance, it also shows why those feelings are not trustworthy. Pale Blue Dot may have been taken from a distance of 6 billion km, but it does not provide a ‘God’s eye’ view of the cosmos. It is, of course, an image, and every image conceals as much as it reveals. Return to the contrast between Pale Blue Dot and Earthrise. Pale Blue Dot reveals something (albeit only a little) of the vastness of the cosmos in which Earth is located; Earthrise conceals this fact. But Earthrise reveals features that are concealed by Pale Blue Dot: Earth’s life-supporting capacities. Neither provides the ‘complete image’ of Earth from outer space – there is no such image.

Once we appreciate this fact, we can start to consider new perspectives on the question of cosmic significance. Here’s one. Suppose that Voyager 1 had been equipped with a device designed to detect consciousness-supporting planets. And suppose that the images produced by this device marked the presence of such planets with bright red pixels. Had Voyager 1 directed its ‘consciousness camera’ Earthwards, we would have been as attention-grabbing as the scrape of a chair in a performance of John Cage’s 4’33”. The feelings generated by Bright Red Dot (as we might call this image) would surely be very different from those elicited by Pale Blue Dot. ‘Small’, the image might seem to say, ‘but enormously significant.’

Does that mean we are significant? Maybe not. Suppose that we used our ‘consciousness camera’ to map not just our corner of the solar system but the entire Universe. What kind of image might it produce? One possibility is that Earth would emerge as the sole red dot in a vast expanse of blackness. (‘Nothing like us anywhere,’ we might say to ourselves with justifiable pride.) But the odds of that are surely low – perhaps vanishingly so. Astronomers suggest that there may be as many as 50 quintillion (50,000,000,000,000,000,000) habitable planets in the cosmos. What percentage of those planets actually sustain life? And, of those that sustain life, what percentage sustain conscious life? We don’t know. But let’s suppose that consciousness is found in only one of every billion or so life-supporting planets. Even on that relatively conservative assumption, there may be as many as 50 billion consciousness-supporting planets. Earth, as viewed through our consciousness camera, would be just one more red dot among a vast cloud of such dots.
Human creativity might be unmatched on this planet; it may even be without peer in the Orion arm of the Milky Way. But, given the numbers, we’re unlikely to be eye-catching from a cosmic point of view. Source of the article

GOATReads: Philosophy

Who was Duns Scotus?

His name is now the byword for a fool, yet his proof for the existence of God was the most rigorous of the medieval period.

I am not nearly old enough to remember dunce caps, but I do remember a pedagogical illustration of a sad little boy sitting in the corner of a classroom wearing a pointy hat while his peers gaze joyfully at their teacher. My teacher explained that the pointy hat was called a dunce cap, and was used in olden times to humiliate and so punish the dunces, that is, the students who cannot or will not learn their lessons. Our own lesson was clear: we might not have the pointy hats anymore, but only sorrow and ostracisation await children who do poorly in school.

Ironically, John Duns Scotus (c1265-1308), after whom the dunces are named, did very well in school, impressing his Oxford Franciscan colleagues so much that they sent him to the University of Paris. His brilliance at Paris eventually earned him the temporary but prestigious post of Regent Master of Theology. His writings, despite their difficulty, were enormously influential in Western philosophy and theology, so much so that universities all over Europe established Chairs of Scotist thought side by side with Chairs dedicated to Thomism. In the 19th century, the Jesuit poet Gerard Manley Hopkins declared that it is Scotus ‘who of all men most sways my spirits to peace’, and halfway through the 20th century the celebrity monk Thomas Merton could say that Duns Scotus’s proof for God’s existence is the best that has ever been offered.

This prestigious legacy notwithstanding, as early as the 16th century educated Englishmen were appropriating ‘Duns’ as a term of abuse. In 1587, the English chronicler Raphael Holinshed wrote that ‘it is grown to be a common prouerbe in derision, to call such a person as is senselesse or without learning a Duns, which is as much as a foole.’ But in the same age a bookish person might also be labelled a dunce: ‘if a person is given to study, they proclayme him a duns,’ John Lyly explains in his Euphues: The Anatomy of Wit (1578). Humanist contempt of scholastic methods and style – of which Scotus’s own tortuous texts sometimes read like a parody – is probably an adequate explanation of the unfortunate union of ‘fool’ and ‘studious’ in ‘dunce’. A person must be a fool to waste time reading John Duns Scotus!

Scotus remains a polarising figure, but his humanist detractors would be horrified to learn that here in the 21st century we are witnessing a Scotus revival. Philosophers, theologians and intellectual historians are once again taking Scotus seriously, sometimes in a spirit of admiration and sometimes with passionate derision, but seriously nonetheless. Doubtless this is due in part to the progress of the International Scotistic Commission, which has in recent years completed critical editions of two of Scotus’s monumental works of philosophical theology: Ordinatio and Lectura. As these and other works have become more accessible, Scotus scholarship has boomed. According to the Scotus scholar Tobias Hoffmann, 20 per cent of all the Scotus scholarship produced over the past 70 years was produced in the past seven years. This explosion of interest in Scotus offers as good an occasion as any for introducing this brilliant and enigmatic thinker to a new audience.

Some of Scotus’s theological concerns are bound, at first glance, to seem irrelevant to secular readers, but theology for Scotus was both a subject in its own right and the context in which to engage in distinctively philosophical activity: from the problem of universals to the grounds of moral authority, from the mind-body relation to the relations between mind, word and world, from the intelligibility of religious language to rational proofs of God’s existence, Scotus has something interesting to say in most of the major contemporary subfields of philosophy.

Of his life, there is, sadly, not much we can say. Probably he was born in the town of Duns in Scotland, in 1265 or 1266. He got involved in the Franciscan movement as a boy, and his Franciscan superiors sent him to their house of studies in Oxford, perhaps around 1280. There he studied the liberal arts and went on to study theology. He was ordained a priest in 1291. By the early 1290s, he had made his first steps as a professional theologian, lecturing at Oxford on Peter Lombard’s Sentences, a standard textbook of theology that served as a de facto syllabus for theology courses at the universities of Oxford and Paris throughout the 13th and 14th centuries. But he also began what was to be a lifelong side interest in writing on Aristotle, producing commentaries on most of the logical works, and at least beginning commentaries on On the Soul and Metaphysics, which he later finished at Paris.

He continued lecturing on the Sentences after his move to Paris sometime before the start of the academic year in 1302. The published versions of these lectures form the bulk of his literary output. We have three distinct versions: the early Lectura, completed and published at Oxford; the middle Ordinatio, started at Oxford; and the later Reportationes, a chaotic collection of student reports on Scotus’s lectures. Of these, the Ordinatio is the most polished and is the closest we have to a complete commentary by Scotus on the Sentences – ‘ordinatio’ itself means, roughly, ‘carefully edited’.

In 1303 he was temporarily exiled from Paris for his support of Pope Boniface VIII over King Philip IV in their dispute over taxation of Church properties. It is not known what Scotus did during this exile, but probably he returned to Oxford and may have spent at least part of the time lecturing at Cambridge. After a year, he was able to return to Paris, where, in 1305, he finally earned his doctorate in theology and presided for a couple of years as the Regent Master of Theology. During his Regency, Scotus conducted a ‘quodlibetal dispute’, a formal academic event at which members of the audience could ask the Master questions on any topic whatsoever. Scotus later published a set of Quodlibetal Questions based on this dispute.

In 1307, Scotus left Paris and took up the far less prestigious post of lector at the Franciscan house of studies in Cologne. A lector at such a house would have the primary teaching responsibility for the friars residing in that house. Compared with the Franciscan house at Paris, let alone the University of Paris, the Franciscan house at Cologne was a backwater. Why Scotus was sent there is not known. Also unknown is the cause of his untimely death in 1308, about a year after arriving in Cologne.

It is, of course, disappointing to have so few details of Scotus’s life. And yet in this very lack there is a lesson about what Scotus’s life was really about. We do not know why he was sent to Cologne at the height of his Parisian success, but we do know that it is very Franciscan to shun worldly acclaim. Scotus was, after all, a Franciscan friar, and the religious order St Francis founded is officially called the order of the Little Brothers of Francis, as a testimony to the poverty and humility they aspired to. It is easy to imagine Scotus the Franciscan willingly taking on a job in Cologne that would result in less time to write, fewer opportunities to dazzle influential peers in philosophical disputation, and hence less fame and prestige than he would have had by staying at Paris.

Given his vocation as a Franciscan friar and a priest, it comes as little surprise that God’s existence and nature, and how we ought to live in light of God, were the central (but not only) topics of Scotus’s philosophical work. But it would be a mistake to think of Scotus’s philosophical efforts as so many attempts to rationalise previously settled dogma – this would be unfair to Scotus, given the extremely high argumentative standards he set for himself.

One dogma that he thought philosophy could demonstrate was the existence of God. As a Catholic theologian, he believed by faith that God exists, but he also thought that philosophy, or natural reason, could demonstrate that there is a supreme nature that is the first cause of everything else, is the ultimate purpose for which everything else exists, and is the most perfect being possible. Moreover, this supreme nature has an intellect and will, and so is personal, and has all the traditional divine attributes such as wisdom, justice, love and power. In short, Scotus thinks that philosophy, unaided by theology, can demonstrate God’s existence. His case is elaborate, developed over 30,000 words in his Tractatus de primo principio – a work I recently translated and wrote a commentary on (forthcoming this year with Hackett Publishing Company) – a virtuosic exercise in the high scholastic style. It develops a sort of hybrid argument influenced by both Aristotelian-Thomistic ‘cosmological’ arguments that approach God from the causal structure of the world, and Anselmian ‘ontological’ arguments that try to establish God’s actual existence from peculiar features of the idea of God. It is widely regarded by specialists as the most rigorous effort to prove God’s existence undertaken in the medieval period.

But while Scotus was confident that we can know God’s existence and many divine attributes by the unaided work of natural reason, he did not think we can, in this way, know everything that there is to know about God. As a Christian, Scotus believed that God is a ‘Trinity’ of divine persons – three persons sharing the one divine nature. But he did not think that we could know this fact about God apart from divine revelation. He extended this intellectual modesty to other distinctively Christian doctrines such as the resurrection of the dead: he thought that philosophy can show that it is probable that human beings have immortal souls, but that belief in the resurrection of the dead (and so the reunification of souls with bodies) is something believed by faith – not opposed to reason but not discoverable by reason.

While Scotus thought that some of his religious commitments could not be proved by reason, he did not think that his religious commitments contradicted anything that reason could show to be true. In this respect, Scotus is an heir of the long tradition of Christian thought that affirms the harmony of faith and reason. Here Scotus is in lockstep with Thomas Aquinas: both think that God’s existence can be demonstrated but that God’s being a Trinity cannot.

Scotus and Aquinas were not in lockstep on every topic, however. One of the most infamous differences between these two great medieval thinkers concerns their views about how our words and concepts work when we try to think and speak about God. Each believed that our thought and language develop from our experience of the world around us. And each recognised that God is not among these familiar objects of experience. So, for both thinkers, it is equally important to offer some sort of theory about how it is that we can think and speak coherently and meaningfully about God using concepts and words tailored to finite, sensible things.

Aquinas adopted the view that, applied to God, our concepts and words have only analogous meaning. For example, ‘wisdom’ as applied to God is only analogously related to ‘wisdom’ as applied to a creature, such as Socrates. Scotus offered a slightly different theory. He argued that at least some of our words and concepts have exactly the same meaning when applied to God as they have when applied to creatures – they are ‘univocal’ (same in meaning), not merely analogous. ‘Being’ itself is the most important of these univocal concepts and terms. Scotus thinks that when we say ‘God is a being’ and ‘Socrates is a being’, ‘being’ has exactly the same meaning in the one as in the other.

To some, this view is startling, even scandalous. Influential writers like Amos Funkenstein and John Milbank think that Scotus’s doctrine of univocity caused monumental changes to Western society. In The Unintended Reformation (2012), Brad Gregory argues that univocity led to the ‘domestication of God’s transcendence’ and the rise of secularism, an ontological flattening in which God and creatures are metaphysically on par, where God is just one more theoretical entity among many, able to be discarded if alternative scientific theories explain data better than theological alternatives. As the sciences progressed and found less and less need of God, religious belief and practice found itself more and more relegated to a subjective realm of feelings and blind faith. Eventually, the sciences, now operating on totally naturalistic assumptions, were given sole responsibility for describing the world objectively.

Whether one welcomes or laments these societal changes, the dunces know that Scotus cannot be responsible for them. To hold that we human beings possess a concept that applies equally to God and to creatures does not entail or even remotely suggest that God exists just like creatures exist. Scotus’s controversial doctrine of univocity is, at worst, harmless for theology. To see this, it is important to keep in mind that Scotus’s doctrine of univocity is itself undergirded by a theory of concepts according to which most of our concepts are themselves complex, able to be analysed down into simpler conceptual components. For example, the most general concept we have by which to think about a creature, as a creature, is finite being.
This complex concept does not apply to God. But the complex concept infinite being does apply to God – in fact, it is, according to Scotus, the most adequate concept we have (by natural reason alone) for thinking about God. And infinite being, of course, applies to no creature. But notice that each of these concepts – finite being and infinite being – is complex, and each includes being as a simple conceptual component. So, on Scotus’s view, if something is a finite being, then it is a being; and, likewise, if something is an infinite being, then it is a being. At this simplest conceptual level, we have just one concept of being that applies to God and creatures. There cannot be a greater ‘gap’ than that between finite and infinite – and Scotus affirms that this is exactly the gap that yawns between creatures and God. But this gap has nothing to do with the fact that the concepts finite being and infinite being share the simple conceptual component of being as such. If Scotus’s doctrine of univocity is to be faulted, therefore, it cannot be for failing to mind the gap between God and creatures. Relevant criticism might take issue with his theory of complex concepts that gives rise to his theory of univocity – but that is a topic for philosophy of mind and philosophy of language, not theology. Scotus declares over and over again that God is the highest good, indeed goodness itself, and that God is truth itself. Given his understanding of how our concepts work when we apply them to God – univocally, as we saw above – Scotus did not think that, when we call God ‘good’ or ‘true’, we are in the dark about what God’s goodness and truth amount to. Sure, we cannot comprehend the infinity of God’s goodness, but we can be confident that, if it’s true that God is good, then God’s goodness is intelligible to us. The intelligibility of divine goodness acts as a sort of conceptual constraint to Scotus’s theorising about God’s relationship to morality. In Scotus we find two grounds or sources of moral norms: on the one hand, following Aristotle, Scotus thought that it is evident from the natures of human beings what is good and bad for us, and this sort of ‘natural goodness’ yields a wide range of norms about right or wrong. But on the other hand, Scotus emphasised God’s freedom over the moral order. God’s commands – eg, thou shalt love thy neighbour as thyself; thou shalt not kill – themselves generate moral obligations, and God’s commanding need not track in every way what can be discerned merely by reflection on human nature. Scotus considers the command to Adam and Eve not to eat the fruit of a certain tree in the Garden of Eden – if God had not commanded them not to, it wouldn’t have been wrong. But God’s freedom over morality itself neither negates what we can discover on our own about right and wrong, nor entails that God’s freely instituted moral norms can invert the natural moral order. Scotus’s traditional insistence that human nature is a source of moral norms is itself supported by his broader realism about universals. In the old dispute, realists hold that there is something real, independent of our thinking, about common natures (nowadays more often called universals). Each of us is a human being, and the humanity we share is itself something real, existing independently of anyone’s forming a concept of humanity. Nominalists, by contrast, deny that common natures like humanity have any sort of mind-independent existence. 
For them, there are indeed individual humans, but humanity is merely a concept or word. Duns Scotus is one of the more emphatic realists of the Middle Ages, while William of Ockham, a Franciscan who died four decades after Scotus, is probably the most famous medieval nominalist.

Realism about common natures gives rise to a philosophical puzzle that the nominalist need not take up: if humanity is something we all share, what makes us the individuals we are? Put another way, if our collective humanity is one, what explains how there are many humans? It is in answer to this question that Scotus develops his doctrine of ‘haecceity’: each individual belongs to the kind it does due to its common nature, but is the individual it is due to its haecceity. ‘Haecceity’ literally means ‘thisness’. It is that feature, unique to each of us, that makes each of us some particular human being. Every other type of property that a thing can have – colour, shape, size, duration, place, and so on – is in principle shareable by something else. Therefore, these shareable properties cannot explain our individuality. So Scotus innovates, inventing (or discovering) an entirely new kind of entity: a property that, at most, one thing can have. Your haecceity is that feature of yours that only you can have.

To see how radical this theory is, consider Thomas Aquinas’s own answer to the question about what individuates things that share a common nature. Aquinas thought that each individual has a particular chunk of matter of a certain quantity, and this chunk of ‘quantified matter’ serves to individuate individual things. So you and I share humanity in common, but I am I because of my matter, and you are you because of your matter. There is something wholesome and simple about Aquinas’s theory, but Scotus criticises it on the grounds that, even if we suppose that you and I cannot share the same matter at the same time, it remains that matter itself, even some particular quantity of matter, is shareable (even if only at different times) and so is unsuitable for making an individual thing to be the very individual it is. Scotus’s haecceity really is a new kind of thing in the history of metaphysics: something real, something that really characterises the thing that has it – but something that is entirely unique to its bearer.

Scotus’s doctrine of haecceity is yet another of his views in which some have discerned world-historical significance. In A Secular Age (2007), Charles Taylor, inspired by Louis Dupré, said that Ockham the nominalist and Scotus the realist share a focus on individuality that gives ‘a new status to the particular’, and marks ‘a major turning point in the history of Western civilisation, an important step towards that primacy of the individual which defines our culture.’

I confess I am often tempted to make sweeping historical conclusions about the medieval figures I work on. If I could believe them, I might think my research is more important than it is, and conduct my work with extra vigour. In a Taylorian spirit, for example, I might say that Ockham and Scotus, along with their predecessor Aquinas, with the focus on individuals these three share, gave rise to the primacy of the individual that defines our culture. Or, in the same spirit but with a greater sense of boldness, I might say that Aquinas, with his materialistic answer to the problem of individuation, along with Scotus and Ockham, who believed in the existence of matter, together ushered in the pervasive materialism of contemporary science and culture.

Of course, it would take a reckless frame of mind to believe either of these assertions: the connections drawn between Aquinas, Scotus and Ockham are insufficiently robust to unite them as common causes of the historical events attributed to them. But that’s the point: a theory of nominalism is about individuals in some sense (since it asserts there are only individuals) and so, too, a theory of haecceity is about individuals in some sense (since it asserts an individuating entity in addition to the common nature). But these theories are about individuals in radically different senses, just as Aquinas’s materialistic solution to the problem of individuation is about matter in a sense radically different from the sense in which, say, Thomas Hobbes is a materialist about human minds. Therefore, they should not be lumped together as common causes of the same historical event.

Ockham’s denial that there is such a thing as human nature does seem like the sort of denial that would affect the way ordinary people live their lives, if it ever came to influence them. The same can be said of Scotus’s affirmation that there is such a thing as human nature. But it would be rather surprising – and a mere accident – if the denial and affirmation of exactly the same view had exactly the same influence on how people live their lives.

As a Scotus scholar, I welcome this century’s revival of interest in Scotus. But a more fruitful way to indulge that interest, especially for those just starting their intellectual journey with Duns Scotus, is simply to try to take him on his own terms, engaging first-order questions of philosophy and theology with Scotus, and resisting the storyteller’s urge to situate this or that feature of Scotus’s thought within a narrative that explains why we are where we are now. It really is just as possible for a person of the 21st century as it was for a person of the 14th to wonder whether God exists, or whether universals are real, or whether objective morality requires a divine lawgiver. When we ask these questions now, we’re asking the very same questions they were asking then. And, thanks to the efforts of the dunces who for centuries have kept alive Scotus’s memory, editing and transmitting his texts, and writing papers and books trying to explain his thought, we can welcome Scotus into our own puzzlings over these and other perennial questions. At the speed of philosophy, 1308 is not so very far away after all. Source of the article

GOATReads: Psychology

Why You're Too Tired to Parent the Way You Want To

Depleted parents can't access the parenting tools they know.

Key points
• Chronic stress limits access to brain regions responsible for self-control and empathy during parenting.
• Your nervous system may activate childhood patterns before your brain can intervene in stressful moments.
• Stress, sleep deprivation, and mental health issues deplete the resources emotional regulation needs.

Finding effective parenting advice on discipline can feel overwhelming, especially when your kids won't listen, and you're tired of parenting struggles. Many parents wonder why parenting is so hard these days, and look for practical parenting strategies for defiance that actually work. On the Your Parenting Mojo podcast, I spoke with parents Adriana and Tim about what it's like to reach that breaking point—when you're tired of parenting but still want to do right by your kids. This post explores why even well-informed parents struggle to use the parenting tools they know—and what's really happening when you can't parent the way you want to.

Why Parenting Feels So Hard These Days

Parenting has always been demanding. But today's parents face unique challenges. We're trying to stay calm, empathic, and connected while juggling endless responsibilities, limited rest, and constant comparison on social media. No wonder parenting feels so stressful.

Here's the core problem: Most parents today are emotionally aware enough to know what to do, but they're too depleted to actually do it. This gap between knowledge and capacity is where exhaustion turns even gentle parenting into frustration.

Adriana captured this perfectly: "My values did not align with my actions as much as I wanted them to." She and Tim had read so many books, and listened to endless podcasts. They understood respectful parenting. And when they were depleted—when both kids were hungry and screaming and one just threw a toy at the other one's head—they defaulted right back to what they saw growing up.

Why Generic Advice Falls Short

The parenting books don't know your specific triggers. They can't tell you how to work with your nervous system when your child screams "I hate you!" and your whole body floods with cortisol because that's exactly what your father used to say before things got violent. This happens because our nervous system stores patterns from childhood and activates them before our thinking brain can intervene.

Tim grew up hearing "men don't cry" and "don't let anybody disrespect you". Adriana grew up in an abusive, neglectful environment, basically raising her younger brothers while their mother struggled with alcoholism. They'd both done recovery work. They had good values. And their bodies still reacted before their brains could catch up. Even if you had a "normal" childhood, it's possible that your needs weren't met, which could have created a trauma-like response in you that's now expressed as anger toward your kids.

You're trying to implement new skills during the worst possible conditions. The skills you practiced in calm moments don't automatically transfer to high-stress situations without support and practice. That's why all those memes you've saved from Instagram or TikTok don't help: When you're actually stressed, everything you know flies out the window.

What Happens When You're Too Tired to Parent

We're used to thinking of exhaustion as being about sleep, but parental burnout is different. It's more like emotional depletion. When your stress levels stay high, your brain's capacity for patience, reasoning, and empathy drops. You might know the "right" response but find yourself yelling, shutting down, or giving in. Research shows that chronic stress limits access to the parts of the brain responsible for self-control and empathy. When your nervous system is dysregulated, no amount of conscious effort can override the body's stress response.

Adriana struggled with postpartum depression and anxiety for two years after having her second child. "Treating my mental health problems is more than just 'go take a bath.' The bath totally helped. But there was more to be done." She knew what she was supposed to do. And she still couldn't do it when she was in the thick of it.

Signs You're Operating on Empty

You know what to do but can't actually do it. You snap before you can stop yourself. You say things you regret. You parent in ways that don't match your values. This happens because emotional regulation requires significant cognitive resources. When those resources are depleted by stress, sleep deprivation, or mental health challenges, your brain literally cannot access the tools you know intellectually.

Adriana and Tim kept asking themselves: "When are we going to stop just surviving?" They were doing everything they could: mindfulness, meditation, reading books, listening to podcasts. And every day still felt like just making it to bedtime.

When you have multiple kids, it can sometimes seem impossible to meet all their needs simultaneously. Both kids melting down at the same time. Both desperately wanting to be held. One child crying while you're helping the other. Everyone upset — and then you explode, and feel guilt and shame for it. You apologize to your kids and say it won't happen again…and feel shame all over again when they say: "But you said that last week too."

When Parenting Advice Backfires

Well-intentioned advice like "stay calm" or "take a deep breath" can create shame when you can't implement it. You beat yourself up for not being able to do what seems simple on paper. You wonder what's wrong with you. Nothing is wrong with you. You're trying to use tools designed for calm conditions in emergency conditions. Your nervous system is doing exactly what it learned to do to keep you safe; it's just not what your kids need right now. Understanding this distinction between your capacity and your values is essential for healing the shame that keeps you stuck.

Final Thoughts

The gap between knowing what to do and actually being able to do it is a capacity issue. Your nervous system is responding to stress exactly how it was trained to respond: through patterns formed in your childhood. When you're depleted, your brain can't access the parenting tools you know intellectually. Generic advice fails because it doesn't account for your specific triggers, your nervous system's patterns, or the reality of trying to learn new skills under high-stress conditions. But recognizing depletion as the root cause rather than blaming yourself or your child opens up new possibilities. Source of the article

GOATReads: Politics

The case for and against counting castes in India

Counting castes in India has always been about more than numbers - it is about who gets a share of government benefits and who doesn't. The country's next national census, scheduled for 2027, will - for the first time in nearly a century - count every caste, a social hierarchy that has long outlived kingdoms, empires and ideologies. The move ends decades of political hesitation and follows pressure from opposition parties and at least three states that have already gone ahead with their own surveys. A 2011 survey - neither run nor verified by census authorities or released by the government - recorded an astonishing 4.6 million caste names. A full count of castes promises a sharper picture of who truly benefits from affirmative action and who is left behind. Advocates say it could make welfare spending more targeted and help recalibrate quotas in jobs and education with hard evidence. Yet in a provocative new book, The Caste Con Census, scholar-activist Anand Teltumbde warns that the exercise may harden the deeply discriminatory caste system, when the need is to dismantle it. The argument cuts against the prevailing view that better data will produce fairer policy. For Mr Teltumbde, castes are "too pernicious to be managed for any progressive purpose". "Caste is, at its core, a hierarchy seeking impulse that defies measurement," he writes. Mr Teltumbde sees the modern caste census as a colonial echo. British administrators began counting castes in 1871 as a "deliberate response to the post-1857 unity of Indians across caste and religion", turning it into an "effective tool of imperial control". They held six caste censuses between 1871 and 1931 - the last full caste enumeration in India. Each count, Mr Teltumbde argues, "did not merely record caste, but reified and hardened it". Independent India, in Mr Teltumbde's reading, preserved the system under the moral banner of social justice, "while effectively evading its core obligation of building the capacities of all people, which is a prerequisite for the success of any genuine social justice policy". The obsession with counting, he says, bureaucratises inequality. By turning caste into a ledger of entitlements and grievances, the census reduces politics to arithmetic - who gets how much - rather than addressing what Mr Teltumbde calls the "architecture of social injustice". He sees the demand for a caste census as a push for more reservations - a cause driven by an "upwardly mobile minority", while the majority slips into deprivation and dependence on state aid. Nearly 800 million Indians, he notes, now rely on free rations. Affirmative action quotas were first reserved for Dalits - formerly known as untouchables - and Adivasis (tribespeople), India's most oppressed groups. But soon, the less disadvantaged "other backward classes" (OBCs) began clamouring for a share of the pie. Politics quickly coalesced around demands for new or bigger caste-based quotas. Mr Teltumbde's deeper worry is that enumeration legitimises what it measures. Political parties, he warns, will exploit the data to redraw quotas or convert caste resentment into electoral capital. For Mr Teltumbde, the only rational politics is one of "annihilation of caste", not its management - echoing what BR Ambedkar, the architect of India's constitution, argued when he said that caste cannot be reformed, it "must be destroyed". But in an India where even its victims "see value in its preservation", that goal feels utopian, the author admits. 
The looming caste census, Mr Teltumbde argues, will not expose inequality but entrench it. Many scholars don't quite agree, seeing the census as a necessary tool for achieving social justice. Sociologist Satish Deshpande and economist Mary E John call the decision not to count castes "one of independent India's biggest mistakes". Today, they note in a paper, caste has come to be seen as the burden only of India's lower castes - Dalits and Adivasis - who must constantly prove their identity through official labels. What's needed, they write, is "a fuller, more inclusive picture where everyone must answer the question of their caste". This isn't an "endorsement of an unequal system", they stress, but a recognition that "there is no caste disprivilege without a corresponding privilege accruing to some other caste". In other words, the lack of reliable caste data obscures both privilege and deprivation. Sociologist and demographer Sonalde Desai told me that without a fresh caste census, India's affirmative action policies operate "blindly", relying on outdated colonial data. "If surveys and censuses could shape social reality, we would not need social policies. We could simply start asking questions about domestic violence to shame people into refraining from wife-beating. We have not asked any questions in the census about caste since 1931. Has it eliminated caste equations?" she asks. Political scientist Sudha Pai, however, broadly agrees with Mr Teltumbde's critique that counting castes can solidify identities and distract from deeper inequalities based on "land, education, power and dignity". Yet she acknowledges that caste has already been politicised through welfare and electoral strategies, making a caste census inevitable. "A caste census would be useful if the income levels within each caste group are collected. The government could then use the data collected to identify within each caste the needs of the truly needy and offer them the required benefits and opportunities, such as education and jobs for upward mobility," Dr Pai says. "This would require moving away from simply using caste as the parameter for redistribution of available resources, to use of both caste and income levels in policymaking." Dr Pai argues that if done "thoughtfully" - linking caste data to income and educational indicators - it could shift India from a "caste-based to a rights-based welfare system". Yet, scholars warn that counting castes and interpreting the data will be fraught with challenges. "It won't be painless. India has changed tremendously in the century since 1931. Castes that were designated as being poor and vulnerable may have moved out of poverty, some new vulnerabilities may have emerged. So if we are to engage in this exercise honestly, it cannot be done without reshuffling the groups that are eligible for benefits," says Professor Desai. Another challenge lies in data collection - castes have many subgroups, raising questions about the right level of classification. Sub-categorisation aims to divide broader caste groups into smaller ones so the most disadvantaged among them receive a fair share of quotas and benefits. "Castes are not made of a single layer. There are many subgroups within a single caste. What level of aggregation should be used? How will the respondents in a census respond to this question? This requires substantial experimentation. I do not believe this has yet been done," says Prof Desai. Mr Teltumbde remains unconvinced. 
He argues that endless enumeration cannot remedy a system built on hierarchy. "You will be counting all your life and still not solve the caste problem. So what will be the use of that counting?" he wonders. "I am not against affirmative action, but this is not the way to do it." Source of the article

Astronomy’s first gap-clearing planet fills in our “missing link”

Planets grow from protostellar material in disks, leading to full-grown planetary systems in time. At last, the final gap has been filled. Here in our Universe, one thing we could have been certain of, even before we began to examine or even detect worlds beyond our own, is that the Universe does have a mechanism for creating planets and planetary systems in orbit around stars. We have some supremely strong evidence that indicates there must be a pathway for that to occur: the existence of Earth and the other planets orbiting our own Sun. Because we exist, and our planet and the other planets in the Solar System exist, the Universe must have some way of creating these planets. So how is it, exactly, that planets actually form within our Universe? To answer that question, we need to look to the Universe itself. Sure, we have theories that detail how planets could form, and by combining two fields of astronomy that might seem barely related — cosmology and exoplanet studies — we can learn an awful lot about the cosmic story that brings planets into existence. But even with all we learn from that, including the conditions under which stars can come to possess planets, we still have gaps in our understanding. In an ideal world, we'd have no gaps at all: we'd be able to trace the story of planet formation, step-by-step, from a pre-stellar cloud of material to a fully grown-up and evolved system of mature planets. Since we don't have tens of millions of years to sit around and watch a system form and evolve, this might seem like an impossibility. But with the new discovery here in 2025 of planet WISPIT 2b, we've finally filled in the last "missing link" in the cosmic story of planet formation. Here's what we know and how we got there. From a cosmic perspective, we know that the very first stars in the Universe couldn't have had planets at all. In the aftermath of the hot Big Bang, the Universe went through several important phases in its early evolution. An early quark-gluon plasma state cooled, creating bound hadrons: specifically leading to a dense, expanding sea filled with protons and neutrons. Slightly later, nuclear reactions began occurring, as protons and neutrons fused together without immediately being blasted apart, creating an initial abundance of the light elements and their isotopes. And then, significantly later, neutral atoms formed, followed by the gravitational growth of overdense regions. Once enough matter accumulates in one pocket of space, star-formation, for the first time in the Universe, can finally occur. But back in these early stages, planet formation is impossible. When these new stars form, sure, there are going to be abundant reservoirs of material surrounding them: material that you'd think could wind up forming a planet. However, that material is almost exclusively hydrogen and helium: some 99.99999991% hydrogen and helium, by mass. With so few heavy elements, whatever doesn't become a star simply gets blown away. What would it take to enable planets to form, then? We'd need, at the very least, sufficient enrichment of that star-forming material to enable the existence of planets at all. Here in our own Solar System, where we have the eight known planets, we can be confident that we're above that enrichment threshold. But is there a hard line, above which we're all but guaranteed to form planets, while below it, planet formation is forbidden?
To answer that question, we have a way of finding out: we can look at the stars in our vicinity and search for planets around them. Then, at the same time, we can measure the heavy element content (what astronomers call "metallicity") of the parent star (or stars), and see which stars do and don't have planets. As it turns out, here in our modern Milky Way, about 80-90% of the stars that we can detect are consistent with having planets around them, but not 100%. It appears that, if you have about 25% or more of the heavy elements found in our Sun, you're almost guaranteed to have planets. If you go down to between 8% and 25% of our Sun's heavy elements, you may or may not have planets. And if you look down at star systems with below 8% of the Sun's heavy elements, very few of them have planets, with no systems below 1% having any planets at all. With over 6000 detected exoplanets on the books, this tells us where planets have and haven't formed. That information serves as the starting point for our big planet-formation question: how do we go from a cloud of gas that's going to collapse to form stars to a full-fledged star system with a system of planets around it? Before we get to the evidence, we should be fair to the theorists and note that there's been at least an outline of a theory for planet formation that's many decades old: older than any of the observational evidence we have for how planets actually do form. In sequence, the steps that should occur look like the following. First, a cloud of gas collapses and fragments, leading to the existence of many different sites within a gas cloud where either a new star (singular) or a system of new stars (two or more) will form. Next, around these protostars, a disk-like distribution of gas and dust — made from the same elements that the star and its progenitor gas cloud are made from — comes to surround them. After some time as a homogeneous disk, instabilities begin to appear, including gaps, spirals, and dense rings of material, leading to feature-rich structure within those disks. At some point, usually after the protostar's ignition into a full-fledged star, that circumstellar material (i.e., material that surrounds the star) gets blown away, eliminating the protoplanetary disk and leaving just a series of planets, plus the remnant dusty debris. And finally, later in the star system's life, the dusty debris gets eliminated as well, leaving just a mature planetary system behind. That is, in theory, at least, how planets ought to form. Many of these steps have strong evidence supporting them. For example, you can look inside of star-forming regions and observe the protostellar cores of a variety of newly forming stars found within. What we find by doing so is extremely reassuring: that gas clouds that collapse to form new stars do indeed undergo fragmentation. When we see a newly formed star cluster, with hundreds, thousands, or even tens of thousands of new stars inside, it's easy to assume that it's only much later — when the cluster dissociates — that our modern population of star systems, mostly singlets, binaries, and trinaries (with a few larger multi-star systems), arises.
But this modern, high-resolution data, acquired with telescopes like the Atacama Large Millimetre/sub-millimetre Array (ALMA), shows that binary and trinary systems are common even from the earliest stages of star-formation, and that while singlet stars are still the majority, it's mostly the low-mass stars that form singlets, while the highest-mass stars tend to wind up in multi-star systems. It looks like, based on the best observational data we have, the theorists got the first step of planet formation exactly right: gas clouds collapse and fragment, leading to the existence of many different, disconnected sites where new stars and protostars arise. Then, we can look into slightly more evolved star-forming regions, such as the nearby Orion Nebula, and find young stars and protostars that still have protoplanetary disks around them. Indeed, these systems are incredibly common wherever new star birth is happening, with the Orion Nebula simply representing the closest location to us where a large amount of new star-formation is still ongoing. Over 100 protoplanetary disk-containing objects — young stars and protostars — have been spotted in the Orion Nebula alone with the combined data from Hubble, JWST, ALMA, and other infrared and radio telescopes. Originally, these protoplanetary disks appeared to us as mere blobs: as dark silhouettes in visible light and as bright sources of emitted light at infrared wavelengths. However, as we began to leverage better techniques, with high-resolution imagery enhanced by modern instrumentation and through the technique of very-long-baseline interferometry, we began to probe these protoplanetary disks for features within them. This was particularly useful when a disk is seen face-on (as opposed to edge-on or highly inclined): sometimes we saw uniform, featureless disks, but at other times we'd see features like spiral waves, rings, or gaps within them. Starting in the early 2020s, we began to see an age difference between the systems that were featureless and the ones that exhibited non-uniform features. In particular, there were three categories that these protoplanetary disks fell into: systems under 0.5 million years in age, all of which appeared to have uniform disks; systems older than 2 million years, all of which appeared to have feature-rich disks; and systems between 0.5 and 2 million years, where some have uniform disks and some display features. Also, systems that were significantly older than 10 million years in age tended to lack protoplanetary disks entirely, indicating that the process of planet formation begins early and completes in relatively swift fashion in the Universe. Features such as "gaps" and "rings" in protoplanetary disks are relatively common, and it's generally suspected that the reason for these gaps-and-rings is simple: those are regions where the protoplanetary material has been "vacuumed up" by planets and protoplanets that are forming in precisely those locations. There's no material there anymore because it has already formed into a planet; the young planet has already cleared its orbit of potentially planet-forming material. This was bolstered in 2023, with the detection of exoplanets PDS 70b and PDS 70c in the same system: found in the inner portions of a cleared-out young star system, one that still had an extant outer protoplanetary disk.
At still later times, of course, we've detected many fully mature planets within planetary systems — including via direct imaging when they're well-enough separated from the parent star — both within systems that still have a debris disk and in systems whose dusty debris disk has fully evaporated. It would seem, then, that we've come a phenomenally long way in learning where exoplanets come from. Protostar cores form from gravitational collapse; those protostars develop circumstellar disks; those disks develop instabilities that lead to gaps, where protoplanets and eventually full-fledged planets form; and the disks themselves then evaporate, leaving mature planetary systems behind. However, there's been a missing link in this chain of understanding for a long time now. Although we can image the disks and see the gaps within them, and we can directly image planets at later stages of evolution orbiting their stars, we've never seen a disk, with gaps, that also contains an observable planet within those gaps. In other words, we've only suspected the presence of planets within these gaps in protoplanetary disks; we've never detected one directly. Or, at least, that was the case until just a couple of months ago here in 2025, when the first planet within a protoplanetary disk gap was discovered: WISPIT 2b. In a pair of papers recently published in the Astrophysical Journal Letters, high-resolution direct imaging of the protoplanetary disk around the solar-analog star WISPIT 2 revealed many different properties: an extended disk spanning hundreds of times the Earth-Sun distance; a multi-ringed substructure within that disk, hinting at the presence of planets in the gaps between the rings; and a young, massive protoplanet embedded within one of the gaps, co-moving along with its host star. That planet, WISPIT 2b, is the first unambiguous planet found within a multi-ringed disk, with an impressive mass of 4.9-5.3 times the mass of Jupiter. It's well below the threshold for becoming a brown dwarf (which requires at least 13 Jupiter masses), the age of the parent star is consistent with the previously uncovered timeline of planet formation (it's about 5 million years old), and the star itself is relatively nearby at 133 parsecs (~430 light-years) distant. The studies also suggest that mass is continuing to accumulate onto this young planet, growing at a rate of 4.5 quadrillion tons per year, or approximately the mass of Mars's larger moon Phobos every day. Although there's also circumstantial evidence for a second, even more massive planet (around 9 times the mass of Jupiter) located closer to the parent star, the big news is that the most significant "missing link" in the planet-formation story — the disconnect between where gaps form and when planets appear — has now been filled in with the discovery of WISPIT 2b. Now there is direct evidence that when we see a gap in these protoplanetary disks, we can be confident that forming planets are indeed responsible for those gaps. The fact that the size of the gap and the mass of the planet are both compatible with theoretical models of the physics at play only strengthens the science case for this interpretation. Excitingly, this suggests that high-resolution direct imaging observations carried out for nearby young stars with current technology can reveal, at the very least, the most massive new planets forming in these star systems.
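A quick back-of-the-envelope check of that accretion comparison, using the standard reference value for Phobos's mass (about $1.1\times10^{13}$ metric tons, a figure not quoted in the article):

$$\frac{4.5\times10^{15}\ \text{tons/year}}{365\ \text{days/year}} \approx 1.2\times10^{13}\ \text{tons/day} \approx M_{\text{Phobos}},$$

so "one Phobos's worth of material per day" holds to within roughly 15 percent.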
Where we see gaps in protoplanetary disks, we now have direct evidence linking the presence of planets to the existence of those gaps: perhaps even a full 100% of gaps in these disks are caused by planets. The WISPIT survey, standing for Wide Separation Planets in Time, leverages the SPHERE instrument on the ESO's Very Large Telescope and the University of Arizona's MagAO-X adaptive optics system on Carnegie Science's Magellan telescope: two of our current generation of flagship-class telescopes. It's almost a certainty that more planets will be found in protoplanetary disk gaps in the coming years, giving us our first end-to-end confirmation of a scenario for how the majority of planets in the Universe actually form. Source of the article

GOATReads: History

The Indian Guru Who Brought Eastern Spirituality to the West

A new biography explores the life of Vivekananda, a Hindu ascetic who promoted a more inclusive vision of religion One morning in September 1893, a 30-year-old Indian man sat on a curb on Chicago's Dearborn Street wearing an orange turban and a rumpled scarlet robe. He had come to the United States to speak at the Parliament of the World's Religions, part of the famous World's Columbian Exposition. The trouble was, he hadn't actually been invited. Now he was spending nights in a boxcar and days wandering around a foreign city. Unknown in America, the young Hindu man, named Vivekananda, was a revered spiritual teacher back home. By the time he left Chicago, he had accomplished his mission: to present Indian culture as broader, deeper and more sophisticated than anyone in the U.S. realized. Every American and European who dabbles in meditation or yoga today owes something to Vivekananda. Before his arrival in Chicago, no Indian guru had enjoyed a global platform quite like a world's fair. Americans largely saw India as an exotic corner of the British Empire, filled with tigers and idol worshippers. The Parliament of the World's Religions was meant to be a showcase for Protestantism, particularly mainline groups like Presbyterians, Baptists, Methodists and Episcopalians. So the audience was astonished when Vivekananda, a representative of the world's oldest religion, seemed anything but primitive—the highly educated son of an attorney in Calcutta's high court who spoke elegant English. He presented a paternal, all-inclusive vision of India that made America seem young and provincial. "I am proud to belong to a religion which has taught the world both tolerance and universal acceptance," he declared on September 11, 1893. "We believe not only in universal toleration, but we accept all religions as true. I am proud to belong to a nation which has sheltered the persecuted and the refugees of all religions and all nations of the earth. I am proud to tell you that we have gathered in our bosom the purest remnant of the Israelites, who came to Southern India and took refuge with us in the very year in which their holy temple was shattered to pieces by Roman tyranny. I am proud to belong to the religion which has sheltered and is still fostering the remnant of the grand Zoroastrian nation." Vivekananda was well-equipped to bridge cultural divides. As a young man named Narendranath Datta, he'd attended Christian schools where he'd been steeped in the Bible and European philosophy. According to one story, his introduction to Indian spirituality came by way of a lecture on English romantic literature. A professor, a Scottish clergyman, mentioned the ecstasies of a nearby guru called Ramakrishna during a discussion of transcendental experiences in William Wordsworth's poem "The Excursion." The students ended up paying Ramakrishna a visit, and Datta went on to embrace Ramakrishna as his guru and adopt a renunciate's name, Vivekananda, which meant "the bliss of gaining wisdom." Now, in Chicago, Vivekananda's words were warm and inviting, but they were also the words of an activist. That same year, Mohandas Gandhi had arrived in South Africa, where he upended the social order by walking on whites-only paths and refusing to leave first-class railroad cars. Vivekananda likewise wanted to show the world that Indians would no longer be demeaned and defined by European occupiers.
He found sympathetic audiences in America, a country that liked to think of itself as anti-colonialist (even as it was on the verge of annexing Hawaii and the Philippines). After speaking to the crowd in Chicago, Vivekananda traveled to Detroit, Boston and New York; he met people who’d been exploring new belief systems, including Christian Science. Many of his listeners were women who applauded his message that the divine was present in every human being, transcending gender and social status. Sarah Ellen Waldo, a relative of Ralph Waldo Emerson, later recalled the experience of strolling through Manhattan with Vivekananda by her side: “It required no little courage to walk up Broadway beside that flaming coat. As the Swami strode along in lordly indifference, with me just behind, half out of breath, every eye was turned on us.” Another female enthusiast was invigorated by “the air of freedom that blew through the room” when Vivekananda debated the president of Smith College. The woman’s father disapproved of her interest in the Indian guru, but when a new calf was born to her family, she defiantly named it “Veda” (after the Hindu scriptures of the same name). Vivekananda spent many of his remaining years traveling around the U.S. and Europe. He died of mysterious causes in 1902, at the age of 39. But generations of Indian gurus who traveled to the West went on to follow his highly successful approach, whether visiting British spiritualist societies or lecturing to middle-aged audiences in Los Angeles living rooms. In the 1960s, the Beatles launched a more youthful wave of interest when they visited India. But the underlying message of teachers from the East has changed little since Vivekananda’s first visit: The individual is cosmic, and meditation and yoga are universal tools for experiencing that underlying reality, compatible with any culture or religion. Such stories and insights about Vivekananda’s life come alive in Guru to the World, a rich and insightful new biography by Ruth Harris, a historian at the University of Oxford’s All Souls College. Smithsonian spoke to Harris about Vivekananda’s travels through the West and how they gave rise to a kind of Eastern spirituality that most Westerners would recognize today. Source of the article

GOATReads: Politics

COP30: Trump and many leaders are skipping it, so does the summit still have a point?

There is a photograph, taken ten years ago in Paris, that today seems like something of a relic. In it, dozens of men and women line up in dark suits, in front of an enormous sign that reads COP21 Paris. Right in the middle, the UK's then-Prime Minister, David Cameron, grins widely as he stands beside the future King Charles III, just in front of China's Xi Jinping. Far off to the right is the then US President Barack Obama, deep in conversation with someone who is cut off from the frame - because there were so many leaders lining up that day that it was difficult for the photographer to capture them all at once. What a far cry from the family photograph taken on Thursday with this year's line-up at the COP30 summit in Brazil. Xi and India's Narendra Modi were no-shows, along with the leaders of around 160 other countries. And notably absent was the US President Donald Trump. In fact, the Trump administration has exited the process entirely and has said it will not send any high-level officials this year. Which raises the question, why have a two-week-long multinational gathering at all if so many leaders aren't there? Christiana Figueres, the former head of the UN's climate process under whose leadership the Paris Agreement was struck, said during last year's gathering that the COP process was "not fit for purpose." "The golden era for multilateral diplomacy is over," agrees Joss Garman, a former climate activist who now heads a new think tank called Loom. "Climate politics is now more than ever about who captures and controls the economic benefits of new energy industries," he tells me. So, with carbon dioxide emissions still rising even after 29 of these meetings - which are, after all, aimed at bringing them down - will more COPs make any difference? Trump and the climate 'con job' On his first day back in office, Trump used his trademark marker pen to withdraw the US from the Paris Agreement, the 2015 UN treaty under which nations agreed to work together to try to keep global warming below 1.5°C. "This 'climate change' - it's the greatest con job ever perpetrated on the world," he told the UN General Assembly in September. "If you don't get away from this green scam, your country is going to fail." He has rolled back restrictions on oil, gas, and coal, signed billions of dollars in tax breaks for fossil fuel firms, and opened up federal lands for extraction. Plus Trump and his team have called on governments around the world to abandon their "pathetic" renewable energy programmes and buy US oil and gas - in some cases with the risk of punitive tariffs if they don't. Japan and South Korea, as well as Europe, have agreed to buy tens of billions of dollars' worth of US hydrocarbons. The objective is clear: Trump says he wants to make the US the "number one energy superpower in the world". Meanwhile, he has set about dismantling his predecessor Joe Biden's clean energy agenda. Subsidies and tax breaks for wind and solar have been slashed, permits withdrawn, projects cancelled. Research funding has been cut too. "Wind power in the United States has been subsidised for 33 years - isn't that enough?" US Energy Secretary Chris Wright said when I asked him to explain the administration's policy at a meeting in September. "You've got to be able to walk on your own after 25 to 30 years of subsidies." John Podesta, a senior adviser on climate to both Obama and Biden, sees it differently. "The United States is taking a wrecking ball to clean energy," he argues. "They're trying to take us back not to the 20th Century, but the 19th."
Last month, a landmark deal that would have cut global shipping emissions was abandoned after the US, along with Saudi Arabia, succeeded in ending the talks. Many supporters of the COP talks are concerned. What happens if the US path leads to other countries dialling down their commitments? Anna Aberg, a research fellow in Chatham House's Environment and Society Centre, describes COP as "taking place in a really difficult political context" given Trump's position. "I think it's more important than ever that this COP sends some kind of signal to the world that there are still governments and businesses and institutions that are acting on climate change." It's too late to win at table tennis Trump's strategy puts the US on a collision course with China, which has also been working for decades to dominate the world's energy supplies - but through clean technology. In 2023, clean technologies drove roughly 40% of China's economic growth, according to the climate website Carbon Brief. After a slight slowdown last year, renewables accounted for a quarter of all new growth and now make up more than 10% of the entire economy. And, like Trump's America, China is engaging internationally well beyond participation in COP - it is taking its entire energy model global. The split has transformed the climate debate. It is now one that pits the world's two superpowers against each other for control of the most essential industry on Earth. And it leaves the UK and Europe - as well as major emerging powers like India, Indonesia, Turkey, and Brazil - caught in the middle. Speaking at this year's conference, a government source from a major developed country said: "Of all the things they're most terrified of, the biggest is being seen to criticise Trump." The President of the European Commission, Ursula von der Leyen, warned last month that Europe must not repeat what she termed the mistakes of the past and lose another strategic industry to China. She called the loss of Europe's solar manufacturing base to cheaper Chinese rivals "a cautionary tale we must not forget". The European Commission has forecast that the market for renewables and other clean energy sources will grow from €600bn (£528bn) to €2 trillion (£1.74tn) within a decade and wants Europe to capture at least 15% of that. But that ambition may come too late. "China is already the world's clean-tech superpower," says Li Shuo, director of the China Climate Hub at the Asia Society Policy Institute. Its dominance in solar, wind, EVs, and advanced battery technologies, he says, is now "virtually unassailable". He likens it to trying to beat the Chinese national team at table tennis: "If you want to surpass China, you had to get your act together 25 years ago. If you want to do it now, you have no hope." China produces over 80% of the world's solar panels, a similar share of advanced batteries, 70% of EVs, and more than 60% of wind turbines - all at phenomenally low prices. The EU's recent move to raise tariffs on Chinese EVs reflects the scale of the dilemma. Open the market and Europe's car industry could collapse; close it and green targets may not be met. Restricting Chinese access to these markets may slow emissions reductions, says Joss Garman, but he argues, "If we ignore questions about economic security, jobs, national security, that risks undermining public and political support for the entire climate effort." COP: New purpose or pointless?
Now, with these shifts in global politics and priorities, Anna Aberg says she expects COP to become an annual forum for "holding to account" countries and other organisations, something she believes remains an "important role". The gathering in Brazil follows the acknowledgement by UN Secretary-General António Guterres that the 1.5°C target set in Paris will be breached - this, he has said, represents "deadly negligence" on the part of the world community. Last year was the hottest ever recorded, and 60 leading climate scientists said in June that the Earth could breach 1.5°C in as little as three years at current levels of carbon dioxide emissions. Yet more people are questioning the need for an annual gathering. "I think we need one big COP every five years. And between that, I'm not sure what COP is for," says Michael Liebreich, founder of energy consultancy Bloomberg New Energy Finance and host of a green energy podcast, Cleaning Up. "You can't just expect politicians to go and make more and more commitments. You need time for industries to develop and for things to happen. You need the real economy to catch up." He believes it would be much more productive for the discussions to happen in smaller meetings focused on removing barriers to clean energy. But he also believes that some issues, like implementation, need to be discussed in places he deems more relevant - like on Wall Street "where people can actually fund stuff" - as opposed to on the edge of the Brazilian rainforest. Even so, there will still be important negotiations at this year's COP. Among other things, it aims to get an agreement for a multi-billion-dollar fund to support the world's rainforests, like the Amazon and the Congo Basin. Michael Jacobs, who advised Gordon Brown on climate policy and is now a politics professor at Sheffield University, believes that continued collective support for the process is crucial. "It's a big political message, because Donald Trump is trying to undermine the collective process, but it's also a message to businesses that they should continue to invest in decarbonisation because governments will continue to enact climate policies." The UK's Energy Secretary, Ed Miliband, believes these meetings have delivered real progress by getting countries to engage with tackling climate change and enact policies that have made the renewable revolution possible. "It's dry, it's complicated, it's anguished, it's tiring," he says - "and it's absolutely necessary". Many now do, however, accept there is a strong argument for these international gatherings to be scaled down. Ultimately, the real choice for so many nations in attendance simply comes down to the extent to which they align with a China-led clean energy revolution - or double down on the fossil-fuels-first agenda. Which is why many observers say the process of decarbonisation is going to be less about the multi-country commitments of COPs past, and far more about big-money deals between individual countries - at this year's summit and in the COPs to come. Source of the article

How Did Food Stamps Begin?

The program was designed to aid American farmers and businesses—as well as the hungry—and had its largest expansion under a Republican president. The U.S. food stamp program was launched at a time when the nation was facing a tragic paradox: As millions of Americans suffered from hunger during the Great Depression, the country's farmers agonized under a crushing bounty. The economic collapse of the 1930s had sapped food consumers of their purchasing power, so farmers found themselves with a glut of crops and livestock. That glut, in turn, sent agricultural prices plummeting. In order to create artificial scarcity and boost prices, the U.S. Department of Agriculture under President Franklin D. Roosevelt initially paid farmers to plow under their fields and slaughter their pigs. The destruction of food at a time when so many stomachs rumbled sparked an outcry that prompted the Federal Surplus Commodities Corporation (FSCC), a New Deal agency established in 1933, to instead purchase excess food and distribute it directly to the needy at little or no cost. This initiative, however, dampened business for grocers and food wholesalers, who complained of government interference and unfair competition in the marketplace. Facing the triple problems of farm surpluses, weak sales for grocers and hungry citizens at a time of 17 percent unemployment, the FSCC hoped tiny paper squares could solve its trilemma. Rochester, New York, then became the petri dish for a new government-run economic experiment. Food Stamps Debut in Rochester, NY On the morning of May 16, 1939, FSCC officials watched anxiously as they opened their doors inside Rochester's old post office to launch the country's latest relief measure. As newspaper reporters and photographers jockeyed for position to document history in the making, the first person in line approached a cashier window. Ralston Thayer, a 35-year-old machinist who had been out of work for nearly a year, handed a clerk $4 from his latest unemployment check and received $4 of orange stamps in return as well as $2 of blue stamps for free. The orange "food stamps" could be redeemed at any of the 1,200 participating Rochester groceries for any goods on the shelves, while blue stamps could only be used to buy surplus agricultural items such as butter, eggs, prunes, flour, oranges, cornmeal and beans. Grocers could exchange the food stamps for money at commercial banks and FSCC offices. "I never received surplus foods before, but the procedure seems simple enough and I certainly intend to take advantage of it," Thayer told reporters. Throughout the day, approximately 2,000 Rochester residents followed in Thayer's footsteps. For every $1 of orange stamps bought, they received 50 cents' worth of blue stamps for free, thereby expanding their purchasing power by 50 percent. That afternoon, waves of customers poured into Rochester's grocery stores with their crisp new booklets of orange and blue stamps in hand. Food stamp recipients approved of the new program, which gave them greater choice in what to eat, beyond just the surplus items being handed out by the government. "Now we'll get the best in food," one woman told the Rochester Democrat and Chronicle. "We can take our pick on these surplus commodities instead of taking what they give us." Rochester grocers benefited as well, as recipients channeled $50,000 into their coffers during the program's first four days.
“I was cleaned out of flour when the stamp rush started,” grocer Joseph Mutolo told FSCC officials when he became the first retailer to redeem the stamps. “That certainly is different from the old days when you gave food away at the big food depot. Then, when you gave away flour or butter, I sold none. Now it seems I can’t keep stocked up.” Building on the initial success, the food stamp program was rolled out to additional pilot cities and expanded to half the counties in the United States. Eligible Americans could buy between $1 and $1.50 in orange stamps weekly for each family member. The program fed 20 million Americans until it was discontinued in 1943 when the economic stimulus provided by World War II eased unemployment and crop surpluses. Food Stamps Revived Under JFK, Expanded Under Nixon President John F. Kennedy, who had been struck by the poverty he had witnessed in West Virginia during the 1960 Democratic primary campaign, revived food stamps as a pilot program as one of his first actions upon taking office in 1961. While recipients were still required to pay for their food stamps, the special stamps for surplus goods were eliminated. The Food Stamp Act of 1964, signed into law by President Lyndon B. Johnson on August 31, 1964, codified and expanded the program. “The food stamp plan will be one of our most valuable weapons for the war on poverty,” Johnson proclaimed at the signing ceremony. Although launched by Democratic presidents, the food stamp program saw its largest expansion under the stewardship of a Republican president, Richard Nixon, in the wake of Senator Robert F. Kennedy’s highly publicized trips to the Mississippi Delta and Appalachia, the Poor People’s Campaign of Dr. Martin Luther King, Jr. and the 1968 CBS documentary “Hunger in America,” which shocked viewers with images of starving children with sunken features and bloated bellies. “That hunger and malnutrition should persist in a land such as ours is embarrassing and intolerable,” Nixon asserted in a May 1969 message to Congress that expressed his determination “to put an end to hunger in America for all time.” During the course of Nixon’s presidency, the food stamp program grew fivefold from 3 million recipients in 1969 to 15 million by 1974. “Nixon was actually very supportive of many social programs, proposing the Family Assistance Program that would have benefited the working poor and expanding Social Security. Nixon’s expansion of food stamps is in line with his larger efforts,” says Matthew Gritter, a political science professor at Angelo State University and author of the book The Policy and Politics of Food Stamps and SNAP. The food stamp program continued to receive bipartisan support in the years that followed. Republican Senator Bob Dole and Democratic Senator George McGovern spearheaded the passage of the Food Stamp Reform Act of 1977, which strengthened anti-fraud provisions and eliminated the requirement that recipients purchase food stamp coupons. Food Stamps Become Electronic, Renamed SNAP Beginning in 1990, electronic benefit transfer cards, similar to debit cards tied to benefits accounts, replaced paper food stamps. The measure further reduced fraud, since recipients could no longer sell stamps instead of using them to purchase food. With the elimination of paper food stamps came a 2008 change in the program’s name to the Supplemental Nutrition Assistance Program (SNAP). 
Gritter says the biggest misconception about the history of the food stamp program is that it grew only with Democratic support. "Food stamps really owe a great deal to Republican presidents. George W. Bush expanded food stamps, particularly in the 2002 Farm Bill that restored eligibility for legal immigrants. Republicans like Nixon and Dole expanded the program. During the welfare reform debate of the 1990s, Republicans such as the moderate Senator Richard Lugar also stood up for food stamps." The SNAP program served an average of 41.7 million Americans per month in 2024. Food producers and retailers also continued to benefit. UBS analyst Michael Lasser estimated that Walmart derived about 4 percent of its U.S. sales from food stamp purchases in 2018. According to purchase-analysis data from 2025, Walmart captures about 24 percent of SNAP-household spending—underscoring how closely food assistance is tied to large food-retail chains. Source of the article

7 Polite Phrases That Are Still Worth Saying

May we borrow a moment of your time to review basic niceties? You might think you've heard them all before—because you have—but certain polite lingo is dropping out of the modern lexicon. That's bad news for everyone, experts agree. "It's really important to mind our manners—and I don't say that as a scold, but I do say it with encouragement," says Lizzie Post, co-president of the Emily Post Institute (and great-great-granddaughter of renowned etiquette expert Emily Post). "It's so amazing how good manners can make such an impact on other people's days—and they catch like wildfire. That person holds the door for you, and you hold the door for the person behind you. It breaks the cycle of stress and rudeness and lack of awareness of others." In order to coexist as peacefully as possible, we asked Post and other experts for a refresher on which polite words still matter the most—and why. "Hello!" When you walk into a coffee shop in the morning, your first words shouldn't have anything to do with your order. Start your interaction with the barista with a friendly greeting—because "not acknowledging someone's humanity before asking them something is pretty rude," says Nick Leighton, who co-hosts the etiquette podcast Were You Raised By Wolves? "Greetings in some places are so important—like in France, saying 'Bonjour' when you walk in a store is crucial," he adds. "In America, we walk in and we're like, 'Oh, give me a croissant,' and we don't say hello first." This advice transcends interactions with customer-service workers: It's also a good idea to get in the habit of saying hello to all of the coworkers you pass when you arrive at work every day, or, for example, the receptionist in your apartment lobby. "Please" Saying "please" transforms a demand into a request. "It acknowledges someone's choice of participation in something, and the impact that their participation might have on their own life," Post says. It shows respect and consideration, and makes it clear that the other person has autonomy in whether they choose to oblige. Still, Post understands why, in some situations, people don't say it. "I think we've leaned away from 'please' because we're worried that in so many of the text messages we send daily, it can come across like, 'Please get this done,' because any magic word can be said the wrong way," she says. "You can do a sarcastic please or a non-genuine please. It's possible to make these words nasty with our tone, but when we don't—when we use them politely and positively—they have profound effects." "Thank you" (with a caveat) Rita Kirk, a professor of corporate communications and public affairs at Southern Methodist University, invites lots of guest speakers to her class. After every visit, she instructs her students to write a thank-you letter—but before forwarding them to the recipient, she reads and grades each one. There's an art to writing a good thank-you note, Kirk says, and rule No. 1 is that "thank you" should never be your very first words. Instead, explicitly express your gratitude by describing what the gift, insight, or time meant to you, and why you're thankful for it. If you were sending thank-you notes after a baby shower, for example, you might write: "I cannot wait to see what the babe is going to look like in her new Western outfit. I promise to take a picture and send it to you. Thank you so much for the thoughtful gift!" Getting into the habit of sending thank-you notes can, literally, pay off.
Kirk remembers one former student who sent her a note that started like this: "Damn you." "It was pretty funny," she says. "She said that all those times in class when she had to write thank-you notes, she rolled her eyes and cursed my name." Yet after graduating, the woman landed a job she really wanted, and eventually asked her employer why her name had risen to the top of the list. Her boss replied: "You were the only one who sent a thank-you note." "May I?" This question is "the ultimate phrase of respect," says Jacqueline Whitmore, an etiquette expert who founded the Protocol School of Palm Beach and wrote Business Class: Etiquette Essentials for Success at Work. It's permission-seeking rather than presumptive, which instantly softens the tone of any request, and it communicates deference and awareness of another person's space or time. Plus, it's versatile enough for all kinds of situations: Use it before giving a colleague feedback on their presentation, Leighton suggests, or when ordering a meal at a restaurant. One of his pet peeves is ordering like this: "I'll take the salmon." Rephrasing as "May I order…" "definitely sounds less like, 'Fetch me this,'" he says, a kindness your server will surely appreciate. "My pleasure" Whitmore always opts for "my pleasure" over the more transactional "you're welcome." "It conveys joy in service—that the act of helping wasn't a burden but a delight," she says. Plus, "Rather than putting the spotlight on the other person—'You're welcome'—you're taking ownership. It's my pleasure to do that for you." Etiquette experts almost universally shy away from one common response to an expression of gratitude: "No problem." "To me it sounds like there was a problem to begin with," Whitmore says—and it insinuates that someone's "thank you" is, in a way, an apology. There's simply no need to bring even the idea of a problem into the exchange, she says. "Excuse me" or "pardon me" In some ways, these phrases are like mini-apologies, Post says. If you burp, you might follow up with an "excuse me," and if you inconvenience someone by asking them to pull their chair in so you can squeeze by, you might issue a quick "pardon me." "They're both used to excuse a mistake or acknowledge an interruption," she says. "It's a way of acknowledging that our behavior might not be the most polite, or to get someone's attention." These simple phrases, Leighton adds, signal that you're aware of and appreciate the fact that other people exist in the world. "We could all use a little more of that," he says. "Friend" or "neighbor" Terms of endearment were once used far more liberally than they are now. People would address each other as "friend" or "neighbor," or even, in church and other situations, "brother" or "sister." These types of terms can be attached to any greeting, question, or remark: "Hey, neighbor! Want some apples?" Or: "Hey, friend, great to bump into you here." "It makes both people feel good," Kirk says. "The real message is that I see you and I value you, and those are not messages that we send very often to other people. We put up these walls to protect ourselves," which doesn't exactly foster a sense of community or connection. If we make an effort to address one another with kindness and affection, on the other hand, well-being will flourish. That, dear reader, is a mission worth pursuing. Source of the article