
Posts

I tried to make my phone as unaddictive as possible – here’s how you can too

The number of hours the average person spends attached to a device per week has soared over the past decade. Accidental screen junkie Helen Coffey implements a range of hacks to wean herself off her smartphone.

Look, I need to make it clear that I never intended to become the kind of person so attached to their phone that I’d end up walking into a tree. Or a lamp post. Or a bollard. But it’s pointless to deny the truth. Admittedly, it was dark (and I was drunk) when the tree thing happened. It’s little consolation. The fact remains that I strolled right into a whacking great trunk while glued to my screen; I’ve still got the scars to prove it. The bollard one was even more humiliating – I ended up hitting it vulva-first in broad daylight while pawing furiously at WhatsApp. As for the lamp post... well, the less said about that, the better.

Accidents aside, I’ve become increasingly anxious about the steady creep of phone addiction over the past 18 months. I often find myself scrolling without remembering when or why I started, suddenly “coming to” out of a fugue-like state with a fuzzy, cotton-wool head. A cosy night in watching a film is inevitably marred by my fractured attention span – I’ll spend half of it transfixed by the smaller screen clutched in my hand instead.

I’m not alone. Average mobile phone use over the past decade has leapt from one hour and 17 minutes a day to three hours and 21 minutes, according to research from the Institute of Practitioners in Advertising (IPA); the daily average for looking at all types of screen (mobiles, laptops, tablets, games consoles and TV) is now almost 7.5 hours. It wouldn’t be such a big deal if we were happy about this state of affairs, but most of us categorically are not. Increased screen time has gone hand in hand with rising global depression rates over the past 20 years, while research has shown a correlation between problematic smartphone use and anxiety, depression, stress and decreased wellbeing.
A survey by youth charity OnSide conducted last year found that, of the quarter of 11- to 18-year-olds who spent the majority of their free time outside school online, more than half (52 per cent) wanted to break their addiction and reduce their screen time. The majority just didn’t know how.

Meanwhile, the popularity of so-called “dumb phones”, officially known as feature phones – stripped-back devices that you can use for texting and calling but not much else, like the old Nokia “brick” phones I cut my teenage teeth on back in the Noughties – has soared. According to consultancy CCS Insight, UK sales of these devices rose from 400,000 in 2023 to 450,000 last year, reports Reuters, while sales in western Europe climbed by 4 per cent the same year, totalling 2.15 million units.

Having tried to use one myself for a week, I found that they’re simply not practical for most adults. The modern world requires smartphones at every turn, from essential authenticator apps to enable secure home working to the storing of digital train tickets and rail cards. I found myself forced to keep “cheating” by using my smartphone, which seemed to somewhat defeat the purpose.

This time around, I decided to trial a whole host of hacks in a quest to slash my existing mobile usage – some employing tech, some deferring to analogue alternatives – to see whether I could develop healthier habits and free myself from spiralling addiction. Here’s what worked (and what didn’t).

Measure your addiction

Measurement is key to reduction. Think about it: how can you possibly cut down on your screen time if you don’t know what it was to begin with? The first and most obvious step, therefore, is to check where you’re currently at before tracking your progress. It’s easier than you might think – the location might be labelled differently depending on the model and make of smartphone, but there will be a section that allows access to some of your usage data.
On my Samsung phone, for example, this information lives in “Settings” under “Digital wellbeing and parental controls”. It includes a weekly report about my average screentime and breakdowns of my most-used apps. At my first check, I averaged 2h 40m a day – less than the UK average, but far more than I’d thought possible, considering I already spend eight hours a day at a screen when working. You can also set yourself a daily target here. I put mine at 1h 30m, which turned out to be wildly optimistic – but shoot for the moon, land among the stars and all that.

Impose limits

In the same digital wellbeing area, you may be able to set yourself time limits for specific apps. Once the allotted time is up, you can’t use that app any more unless you manually add extra minutes. I decided to set restrictions for WhatsApp and Instagram, the two that commandeer the majority of my time. Of course, the trouble is that it’s incredibly easy to add more time – a matter of a few taps – something I repeatedly ended up doing for WhatsApp because otherwise you can’t receive new messages. (The notion of simply not receiving them until the next day proved too stressful for someone used to being permanently contactable.) But I do find the timer element somewhat helpful in that it puts the kibosh on my incessant Insta scrolling – and even when I do add time on WhatsApp, I’m aware it’s technically “borrowed”. I try to be more efficient with my correspondence, getting in and out like a ninja.

Keep notifications at bay

I’ve had my phone permanently set to silent mode – it doesn’t even vibrate – for the past five years to ensure I’m not constantly distracted by the “ding!” of endless messages and notifications. The only tangible downside is when you misplace your device and realise it can’t be located by simply calling it.
I also started flipping my phone face-down, obscuring the screen, when out for dinner with friends – a symbol that I was present and spending intentional time with them, with no wish to put the whims of the digital world ahead of the real one. (A better practice might have been to put it away altogether – but baby steps.) This week, I tried implementing the screen-down technique elsewhere, particularly when working, to stop my eye from being drawn to the screen every time a new notification popped up. Although it works well enough when socialising, the temptation when alone is to just keep turning my phone over every five minutes with the justification that “I might have missed something”. For a true addict, this method sadly doesn’t touch the sides.

Buy a (non-smart) watch

This might sound laughably simple, but I’m convinced that, overall, it’s had the biggest impact. Previously, I would constantly turn to my phone to find out the time. Every time I did, I got sucked in. There would be a new email to read, a new DM to respond to, a new voicenote to listen to. Next thing you know, 20 minutes had been lost – and I still didn’t know what flipping time it was. This problematic smartphone habit was all but eradicated overnight for the price of a pre-loved Casio (£20 on Vinted, absolute bargain). And speaking of analogue timekeeping, employing an actual alarm clock instead of relying on your phone is a cheap and easy way of ensuring that staring at social media in horror as you contemplate the twisted, broken state of the world isn’t the very first thing you do upon waking.

Make it black and white

The bright and jazzy colours on your phone screen are no accident; they’re very much there by design, optimised to set your brain’s pleasure centres alight. One of the most effective remedies, therefore, is to rob your screen of its seductive technicolour and swap rainbow for greyscale.
One 2023 study found that this simple technique reduced participants’ daily screen time by 20 minutes at a stroke. The process is not always particularly intuitive and will be different depending on the device’s precise make and model – google your specific phone to find out how to implement it. On my Samsung, for example, I went to Settings, Accessibility, Visibility Enhancements, Colour Correction, then selected Greyscale and toggled it to “On”. The effect is instantaneous and arresting – like stepping from Oz into Kansas – and your phone immediately feels at least half as enticing, conjuring up drizzly Sunday afternoons spent watching black and white telly at an elderly relative’s house (I may be showing my age here). As it’s slightly annoying to have to go through this multi-step journey in reverse every time you want to look at a photo, for example – and then far too tempting not to switch back to greyscale – another top tip is to create a shortcut so that you can easily switch between the two modes at the touch of a button. This might sound unnecessarily lazy, but my main finding from all of this is that we are lazy. The easier you can make it not to use your phone, the better.

Leave it elsewhere

Clearly, the most surefire way to use your phone less is to be without it more: you can hardly use it if it’s not there. In fact, research has shown that the proximity of a device has a huge impact, not just on our usage but on our ability to perform other tasks. One study ran experiments in which participants had to complete cognitive tests. They were split into three groups: one had to leave their phones in their bags in another room; one could hang onto them but had to keep them out of sight; and the third were told to have them visible on the desk without succumbing to using or checking them. The group whose phones were in another room outperformed everyone else, while those who could see their phones fared far worse.
The study authors concluded that “the mere presence of these devices reduces available cognitive capacity”. Leaving my phone in another room for a set period of time – for a couple of hours, for example, while I actually watched a film instead of falling prey to the weird, unfulfilling split-screen business – made me feel more relaxed and able to concentrate. Out of sight, out of mind, is the literal name of the game. The next step is to sometimes leave my device at home when I go out, especially if it’s a low-key excursion like nipping to the shops. But the fear that I might suddenly need my phone in an emergency – or, conversely, that someone else might have an emergency and need me – is still alarmingly deep-rooted.

‘Boringify’ your phone

It may sound counterintuitive to suggest downloading more apps to help you spend less time on your phone – but this one’s a winner, I promise. There is a whole range of “minimalist launchers” whose sole purpose is to make your phone’s interface more boring. I go for a free one called Olauncher: it instantly swaps out all my jaunty app icons for a joyless list of their names. It’s incredible how much less tempting TikTok, Hinge and the like are when displayed as stark text; even the font, a charmless sans serif number, is something of a turn-off.

The results

After combining these techniques for a week, I find that my average screentime has dropped by 22 minutes a day to 2h 18m. More gratifyingly, on three days I manage to get it down to under 1h 50m, with my lowest recorded daily use at 1h 40m – 10 minutes off my target. The previous week, my heaviest use day saw me spend a truly horrifying five hours attached to my device; this week, my max has been 3h 15m, mostly the consequence of watching Netflix while stuck at the station awaiting a chronically delayed train.
Now that I’ve started, the competitive part of me feels driven to continue paring back my screen time until I hit the absolute minimum necessary to live in the modern world. In the meantime, every second of my attention I manage to claw back from my phone – and every bollard I manage to avoid walking into along the way – feels like its own small but glorious victory.

Source of the article

GOATReads: Psychology

Scientists Say These Daily Routines Can Slow Cognitive Decline

Brain experts already know that a number of habits can keep the brain in good shape. Exercise, a healthy diet, staying socially engaged, getting enough sleep, and maintaining heart health can all help slow cognitive decline. But most of that understanding comes from observational studies, which correlate people’s behaviors with outcomes while trying to account for other factors that may interfere with the results. While useful, these do not yield the type of solid scientific evidence that doctors like to act on and strongly recommend to their patients.

A new study published in JAMA and presented at the Alzheimer’s Association International Conference, however, may finally provide the stronger evidence that doctors pass on to their patients. The study included more than 2,100 older adults—ages 60 to 79—who did not have symptoms of cognitive decline or dementia but who were at higher risk for them. Their risk factors included being sedentary, eating an unhealthy diet, having a family history of memory problems or a genetic predisposition for them, having heart disease risk factors, or belonging to certain ethnic groups that have a higher risk of developing dementia and Alzheimer’s disease.

Participants were randomly assigned to one of two lifestyle programs that ran for two years. One was more structured and involved 38 group sessions in which clinicians and participants set goals for meeting certain health benchmarks. It also included weekly online brain-training sessions, appointments to review lab test results with a clinician, and a $10 monthly rebate, provided by the U.S. Highbush Blueberry Council, for people to purchase blueberries. (Studies have linked the antioxidants in blueberries to slower cognitive decline.) The other, less-structured group was provided general information about making brain-healthy changes, met in teams about three times a year, and was provided a $75 gift card at each meeting to spend as they wished on healthy behavior changes.
Phyllis Jones, from Aurora, Ill., had a personal reason for joining the study. “I watched my mom with dementia for 8.5 years,” she says. “It was very difficult to watch. But what made it harder for me was that I had seen my mother caring for her mother. So I saw two generations ahead of me go through that.” Joining the study, she says, “gave me a way to try to break the cycle from happening again.”  At the end of the study, everyone improved on their cognitive function score, but the group following the more structured program showed significantly greater improvement. On average, those in the structured change group appeared to slow their cognitive decline by one to two years. “Because of the rigor of the trial, the size of the trial, and the care we took in rolling out the interventions, we now finally have scientific evidence that healthy behavior does matter,” says Laura Baker, professor of gerontology and geriatrics at Wake Forest University School of Medicine. “That’s new information. We all think we know it, but until now, we didn’t have the science.” Jones, who was part of the structured group, says that prior to the study, she was stressed out at work and not taking care of her health. “I was not exercising or eating the right things,” she says. The turning point for her, she says, occurred when her son said, “‘Mom, I didn’t expect to be a caregiver for you at this stage in your life, not yet.’ I knew I had to find a way out of this dark hole.” While the structured program required more commitment and accountability from the participants, Jones says she didn’t feel discouraged or frustrated with making the changes, since the study ramped them up gradually, and she had the support of other people in her group. “We had team meetings, and instead of starting out with the prescribed 30 to 35 minutes of exercise a day, four times a week, we started with 10 minutes a day and moved up from there,” she says. 
“If you are living a sedentary lifestyle, 10 minutes a day is a good start. And then if you can get through the 10 minutes, you can push to 20 minutes and then get to 30 minutes.”

One of the women in Jones’ group was Patty Kelly, and the two inspired and motivated each other. Kelly, 81, says she struggled with weight most of her life, and weighed 130 pounds in the third grade. Kelly’s mother, like Jones’, was diagnosed with dementia at 77, and Kelly cared for her for seven years, watching as she gradually failed to recognize her family. “I wanted to make sure my sons did not have to go through that with me,” she says.

Both Jones and Kelly say they are different from when they began the study in many ways. Kelly says her driving has improved, which she attributes to the regular brain-training exercises she did on the computer as part of the study. “At this point, I’m living my life with my arms wide open,” says Jones, who is now working at a job she loves. “I’m just having a ball.”

The improvement occurred in people with the APOE4 genetic risk factor for Alzheimer’s disease, as well as those without the variant—an encouraging indication of the power of lifestyle changes. Everyone experiences cognitive decline with age, Baker says. But the results provide reassuring evidence that not all cognitive decline that occurs with age is inevitable. The trial was a continuation of a similar study published in 2015 from Finland demonstrating similar cognitive benefits in people at risk of developing Alzheimer’s and other dementias who made behavior changes. The current trial was intended to involve a more diverse population at risk of not just Alzheimer’s, but the broader condition of dementia. “It doesn’t reverse the clock but it’s very clear that it can slow, and pull back the clock by one to two years,” she says.
Baker and her colleagues determined this by relying on a global score to measure people’s cognitive state—the result of a compilation of different cognitive tests that neuropsychologists typically use to assess brain function. The suite of tests is designed to pick up even the smallest changes in cognitive function, something that cognitive tests to detect Alzheimer’s can’t do.  Baker says she plans to follow the people in the study for another four years to record what impact the behavior changes might have on the incidence of dementia and Alzheimer’s. About 30% of the people in the trial showed signs of amyloid, the hallmark of Alzheimer’s, in their brain scans, although they did not have any symptoms of memory loss or other cognitive deficits yet. Another third carried the APOE4 gene that increases the risk of Alzheimer’s, so Baker says following them will provide better insight into “how to improve cognition as a means of increasing resilience to decline.” Maria Carrillo, chief scientific officer at the Alzheimer’s Association (which supported and contributed to the design of the study), says that the follow-up could also include looking at the relationship between behavior changes and Alzheimer’s medications for people who have the disease to better understand how making such lifestyle changes early might impact the disease’s severity. The researchers are also eager to study how the popular GLP-1 weight loss drugs affect cognitive decline, since some early studies hint that the drugs may help to lower inflammation associated with Alzheimer’s disease. The rigor of the study means that doctors can, and should, start talking to their patients about making behavior changes to maintain their brain health, Carrillo says. 
“This could be something that if we are able to roll it out—which is our intent—through health systems, clinics and public health organizations, we could create something in which more and more individuals out there understand the impact that their everyday activities have in improving their health,” she says. “Nobody really thinks about cognitive function, and rarely do primary care doctors ask about it. This will change that.” The good news is that even those who made changes on their own experienced some slowing in cognitive decline. That also suggests that even if people adopt part of the behavior changes—starting with their diet, for example, if they aren’t able to exercise—they might still improve their brain health. “Anything is better than nothing,” says Baker. Jones views her new lifestyle as a positive example for her seven-year-old granddaughter, who knows that it’s not a good lunch unless it includes a salad. “She got that from grandma,” she says. She now volunteers as a community educator for the Alzheimer’s Association, helping people who are living through the same challenges she faced before joining the study.   It’s all about taking the first step, and now there’s strong evidence that the effort is worthwhile. “For people who are already doing these things, it gives them hope that they are saving their brain health, and that’s great,” says Carrillo. “For those who are halfway there, this encourages them to take it to the next level and exercise a little more or eat a little healthier. And for those on the other end of the spectrum—who aren’t following the program at all—it gives them hope that if they do certain things, they can indeed change their cognitive trajectory. It’s good news for everybody.” Source of article

When Does Capitalism Become Predatory?

Companies seek profits. That’s not illegal. But the law doesn’t dictate the bounds of ethics. When’s the last time you had an interaction with a company that made you question your sanity? Mine was with a long-term care insurance provider. My parents, both in their 80s, have paid for their policy for decades, so when my dad told me he’d been struggling — unsuccessfully — to submit a claim, I figured I could easily resolve the problem. Ha. Instead, I found myself tangled in a Kafkaesque bureaucracy that was clearly designed to maximize a customer’s frustration until they give up. Communications went into the void. Forms were repeatedly lost or rejected for absurd reasons — and even when the company acknowledged it had received them, it was months before they were accepted, much less processed.

It was a textbook case of what I call predatory capitalism: companies’ increasingly common willingness to harm customers in their pursuit of short-term profits. It’s a notorious issue in the insurance industry (remember the wave of public rage directed at insurers after the killing of UnitedHealthcare CEO Brian Thompson?), but it appears elsewhere as well. Take private equity-owned nursing homes, for example; research has found that when a PE firm takes over, the quality of patient care decreases, as do quality ratings and patient mobility. Meanwhile, pain levels and short-term mortality increase. Beyond health care, consider online gambling — particularly sports betting. In addition to draining the savings of those who can least afford it, these companies have been accused of using sophisticated analytics to target people with gambling disorders. In fact, researchers say problem gamblers account for a hugely disproportionate fraction of online gambling revenue. The result? A University of California San Diego study found that when online sportsbooks enter a state, internet searches seeking help for gambling addiction go up by 61%.

Companies seek profits. That’s their job.
And nothing I’ve described here is illegal. But the law doesn’t dictate the bounds of ethics — particularly when companies have vast political power to shape the law for their own benefit. When businesses take advantage of that legal leeway to harm their customers, people’s faith in the system and in corporate leadership erodes. Some companies have always acted badly, but the public’s attitude towards business has changed radically in a generation. Gallup polling shows that the percentage of Americans who have “a great deal” or “a lot” of confidence in big business has plunged from 30% in 1990 to 16% in 2024. Why are businesses now so routinely predatory? It’s rooted in the “financialization” of the American economy. As the financial sector has gotten larger and more powerful, its expectations and demands have changed the behavior of companies in all parts of the economy. That means more companies have made outperforming the market — and delivering the best short-term results — their highest priority. Businesses that want to improve their numbers to fulfill those demands have, at the highest level, two choices. They can create new customers, or they can squeeze more out of the ones they already have. The first is usually difficult. It requires innovation, inspiration or both. The leaders who can consistently do so become legends — think Steve Jobs or Henry Ford. The second is easier. A lot easier if you’re willing to squeeze people who can’t push back, like senior citizens. Might customers who feel cheated or abused abandon you eventually? Sure, but by the time that happens, you’ll have banked the necessary profit — and the fallout will be your successor’s problem. Aggregate that attitude across the entire economy and you get rising corporate profits, a booming stock market, low unemployment and a population that increasingly feels like corporate leaders and capitalism see it as prey. 
Leaders who are shocked by Americans’ distrust (and the attendant rise of politicians preaching socialist politics) might be well-advised to look at many of their fellow CEOs for an explanation. If they want to restore faith in the system, they can start with their own actions. A few suggestions:

Stop saying that your only responsibility is to maximize shareholder value. As a matter of law, it’s not true. And it serves as an excellent excuse for any kind of predatory behavior.

Acknowledge that some regulation — even regulation that hurts short-term profits — is necessary and even beneficial in the long run. Companies are measured against their competitors. If your rival is goosing its numbers through predation, it puts enormous pressure on you to do the same. Regulation can solve that problem for you, and business leaders should be unafraid to call for banning behavior that they know is morally unacceptable, however profitable it might seem.

Act as stewards, not predators. A predator feasts and leaves nothing behind. A steward’s chief responsibility is to leave things better than he or she found them. Being a corporate steward means building a company that will endure — and requires an honest relationship with customers. Predators are feared and despised. Stewards are honored and loved. Which one would you rather be?

Source of the article

GOATReads: Politics

What Vietnam’s Scarred Lands Reveal About Modern Warfare

Fifty years on, Vietnam is still reckoning with the long-term ecological toll of U.S. warfare—a grim warning as Israel and Russia unleash similar destruction in Gaza and Ukraine.

WHEN THE VIETNAM WAR finally ended on April 30, 1975, it left behind a landscape scarred with environmental damage. Vast stretches of coastal mangroves, once housing rich stocks of fish and birds, lay in ruins. Forests that had boasted hundreds of species were reduced to dried-out fragments, overgrown with invasive grasses. The term “ecocide” had been coined in the late 1960s to describe the U.S. military’s use of herbicides like Agent Orange and incendiary weapons like napalm to battle guerrilla forces that used jungles and marshes for cover. Fifty years later, Vietnam’s degraded ecosystems and dioxin-contaminated soils and waters still reflect the long-term ecological consequences of the war. Efforts to restore these damaged landscapes and even to assess the long-term harm have been limited. As an environmental scientist and anthropologist who has worked in Vietnam since the 1990s, I find the neglect and slow recovery efforts deeply troubling. Although the war spurred new international treaties aimed at protecting the environment during wartime, these efforts failed to compel post-war restoration for Vietnam. Current conflicts in Ukraine and the Middle East show these laws and treaties still aren’t effective.

AGENT ORANGE AND DAISY CUTTERS

The U.S. first sent ground troops to Vietnam in March 1965 to support South Vietnam against revolutionary forces and North Vietnamese troops, but the war had been going on for years before then. To fight an elusive enemy operating clandestinely at night and from hideouts deep in swamps and jungles, the U.S. military turned to environmental modification technologies. The most well-known of these was Operation Ranch Hand, which sprayed at least 19 million gallons of herbicides over approximately 6.4 million acres of South Vietnam.
The chemicals fell on forests and also on rivers, rice paddies, and villages, exposing civilians and troops. More than half of that spraying involved the dioxin-contaminated defoliant Agent Orange. Herbicides were used to strip the leaf cover from forests, increase visibility along transportation routes, and destroy crops suspected of supplying guerrilla forces. As news of the damage from these tactics made it back to the U.S., scientists raised concerns about the campaign’s environmental impacts to then-President Lyndon Johnson, calling for a review of whether the U.S. was intentionally using chemical weapons. American military leaders’ position was that herbicides did not constitute chemical weapons under the Geneva Protocol, which the U.S. had yet to ratify. Scientific organizations also initiated studies within Vietnam during the war, finding widespread destruction of mangroves, economic losses of rubber and timber plantations, and harm to lakes and waterways. In 1969, evidence linked a chemical in Agent Orange, 2,4,5-T, to birth defects and stillbirths in mice because it contained TCDD, a particularly harmful dioxin. That led to a ban on domestic use and suspension of Agent Orange use by the military in April 1970, with the last mission flown in early 1971. Incendiary weapons and the clearing of forests also ravaged rich ecosystems in Vietnam. The U.S. Forest Service tested large-scale incineration of jungles by igniting barrels of fuel oil dropped from planes. Particularly feared by civilians was the use of napalm bombs, with more than 400,000 tons of the thickened petroleum used during the war. After these infernos, invasive grasses often took over in hardened, infertile soils. “Rome Plows,” massive bulldozers with an armor-fortified cutting blade, could clear 1,000 acres a day. Enormous concussive bombs, known as “daisy cutters,” flattened forests and set off shock waves killing everything within a 3,000-foot radius, down to earthworms in the soil. The U.S. 
also engaged in weather modification through Project Popeye, a secret program from 1967 to 1972 that seeded clouds with silver iodide to prolong the monsoon season in an attempt to cut the flow of fighters and supplies coming down the Ho Chi Minh Trail from North Vietnam. Congress eventually passed a bipartisan resolution in 1973 urging an international treaty to prohibit the use of weather modification as a weapon of war. That treaty came into effect in 1978. The U.S. military contended that all these tactics were operationally successful as a trade of trees for American lives. Despite Congress’ concerns, there was little scrutiny of the environmental impacts of U.S. military operations and technologies. Research sites were hard to access, and there was no regular environmental monitoring.

RECOVERY EFFORTS HAVE BEEN SLOW

After the fall of Saigon to North Vietnamese troops on April 30, 1975, the U.S. imposed a trade and economic embargo on all of Vietnam, leaving the country both war-damaged and cash-strapped. Vietnamese scientists told me they cobbled together small-scale studies. One found a dramatic drop in bird and mammal diversity in forests. In the A Lưới valley of central Vietnam, 80 percent of forests subjected to herbicides had not recovered by the early 1980s. Biologists found only 24 bird and five mammal species in those areas, far below normal in unsprayed forests. Only a handful of ecosystem restoration projects were attempted, hampered by shoestring budgets. The most notable began in 1978, when foresters began hand-replanting mangroves at the mouth of the Saigon River in Cần Giờ forest, an area that had been completely denuded. In inland areas, widespread tree-planting programs in the late 1980s and 1990s finally took root, but they focused on planting exotic trees like acacia, which did not restore the original diversity of the natural forests.

CHEMICAL CLEANUP IS STILL UNDERWAY

For years, the U.S.
also denied responsibility for Agent Orange cleanup, despite the recognition of dioxin-associated illnesses among U.S. veterans and testing that revealed continuing dioxin exposure among potentially tens of thousands of Vietnamese. The first remediation agreement between the two countries only occurred in 2006, after persistent advocacy by veterans, scientists, and nongovernmental organizations led Congress to appropriate US$3 million for the remediation of the Da Nang airport. That project, completed in 2018, treated 150,000 cubic meters of dioxin-laden soil at an eventual cost of over US$115 million, paid mostly by the U.S. Agency for International Development, or USAID. The cleanup required lakes to be drained and contaminated soil, which had seeped more than 9 feet deeper than expected, to be piled and heated to break down the dioxin molecules. Another major hot spot is the heavily contaminated Biên Hoà airbase, where local residents continue to ingest high levels of dioxin through fish, chicken, and ducks. Agent Orange barrels stored at the base leaked large amounts of the toxin into soil and water, where it continues to accumulate in animal tissue as it moves up the food chain. Remediation began in 2019; however, further work is at risk with the Trump administration’s near elimination of USAID, leaving it unclear if there will be any U.S. experts in Vietnam in charge of administering this complex project.

LAWS TO PREVENT FUTURE “ECOCIDE” ARE COMPLICATED

While Agent Orange’s health effects have understandably drawn scrutiny, its long-term ecological consequences have not been well studied. Current-day scientists have far more options than those 50 years ago, including satellite imagery, which is being used in Ukraine to identify fires, flooding, and pollution. However, these tools cannot replace on-the-ground monitoring, which often is restricted or dangerous during wartime. The legal situation is similarly complex.
In 1977, the Geneva Conventions governing conduct during wartime were revised to prohibit “widespread, long term, and severe damage to the natural environment.” A 1980 protocol restricted incendiary weapons. Yet oil fires set by Iraq during the Gulf War in 1991 and recent environmental damage in the Gaza Strip, Ukraine, and Syria indicate the limits of relying on treaties when there are no strong mechanisms to ensure compliance. Some countries have adopted their own ecocide laws. Vietnam was the first to legally state in its penal code: “Ecocide, destroying the natural environment, whether committed in time of peace or war, constitutes a crime against humanity.” Yet the law has resulted in no prosecutions, despite several large pollution cases. Both Russia and Ukraine also have ecocide laws, but these have not prevented harm or held anyone accountable for damage during the ongoing conflict.

LESSONS FOR THE FUTURE

The Vietnam War is a reminder that failure to address ecological consequences, both during war and after, will have long-term effects. What remains in short supply is the political will to ensure that these impacts are neither ignored nor repeated. Source of the article

GOATReads:Sociology

How the Fantastic Four Shaped the Future of Superheroes

When The Fantastic Four: First Steps premieres this week, it will mark the return to prominence of four heroes not just foundational to Marvel and its ever-expanding empire of comics, movies, and television shows, but to modern pop culture and storytelling. The Fantastic Four, a tight-knit family with strange powers, were created by comic industry veterans Stan Lee and Jack Kirby in 1961. The comic, with its bickering heroes and setting in New York City, defied genre conventions and offered a radically different vision of superheroes than the staid, righteous Superman and Batman. Immediately successful, the Fantastic Four birthed modern Marvel comics and its vast, interrelated web of heroes and villains spanning more than 35,000 issues to date. It also created the template for the modern superhero—irreverent and wise-cracking, but flawed and vulnerable. From the Fantastic Four, the Marvel style of superheroics multiplied, yielding Spider-Man, the Hulk, the X-Men, and Iron Man, among many others. Inevitably, the Marvel brand of superhero narrative leapt from the printed page to other media, first cartoons, then television and on to the movies. The Fantastic Four didn’t just pave the way for the Marvel Cinematic Universe, a 37-film behemoth that has grossed $31.9 billion, but also seven Superman movies ($2 billion and counting), 13 X-Men movies ($2.49 billion), the Dark Knight Trilogy ($1.12 billion) and dozens of others. Beyond the superhero genre, it’s hard to watch franchises like Star Wars and the Fast & the Furious, with their bickering, misfit heroes, without seeing traces of the Fantastic Four’s DNA. “The Fantastic Four were always the heart and soul and center of the Marvel universe and the Marvel universe has inspired so many creative people in so many different ways,” says Tom DeFalco, the former editor-in-chief of Marvel who wrote 60 issues of the Fantastic Four comic in the 1990s. 
On and off the silver screen

For characters so integral to Marvel and its history, the Fantastic Four have been noticeably absent from its cinematic universe. That’s largely a result of misguided deals made in the 1990s, when a cash-strapped Marvel sold off the movie rights to its top-tier characters, including Spider-Man, the X-Men, and the Fantastic Four. While Spider-Man and the X-Men both enjoyed some success in their early 2000s movies, Fantastic Four fans were not as fortunate, with a pair of joke-heavy movies released in 2005 and 2007 to mostly poor reviews, and a disastrous 2015 reboot that made the first two shine in comparison. The Fantastic Four comic has also faded in and out. Starting out as Marvel’s flagship comic in the 1960s, it sputtered in the 1970s before taking off again in the 1980s. The comic drew critical acclaim under writer Jonathan Hickman in the early 2010s, before disappearing entirely from 2015 to 2018, allegedly to deny Fox any free publicity for its movie. Marvel regained the rights to the Fantastic Four (as well as the X-Men) when Disney acquired Fox’s film studio in 2019, and the comic, currently written by Ryan North, has been on a recent upswing. Despite that checkered history, C.B. Cebulski, Marvel’s editor-in-chief, says the company has never wavered in its commitment to the Fantastic Four comic and the title will enjoy extra attention in the wake of the movie release. “From my point of view, the FF’s been the core,” Cebulski says. “They’ve been the core in publishing. What’s happened outside of publishing was never really a concern to me. But we’ve always focused our best efforts on making sure those four — Reed, Johnny, Ben, and Sue — were somehow featured in the best possible light every year since I’ve been at Marvel and before.”

The story of the Fantastic Four

It’s hard to imagine now, in this era of superhero ubiquity, but there was a time when costumed crusaders had all but vanished from the cultural landscape.
Modern superheroes were born in comic books in the late 1930s and they headlined dozens of titles throughout the 1940s. Fueled by patriotic stories, circulations soared, with some titles selling more than a million issues annually. But by the mid-1950s, superheroes had all but vanished from newsstands, a result of changing tastes and a paranoid, Cold War-fueled campaign to protect children from harmful influences. The catalyst was Seduction of the Innocent, a 1954 book by psychiatrist Fredric Wertham that argued American children were being led into juvenile delinquency by lurid and violent comics. Wertham’s book prompted a Congressional inquiry, led by Senator Estes Kefauver of Tennessee, best known for his investigations into organized crime, and the blacklisting of dozens of comic creators. It also led the comic book industry to create the Comics Code Authority, a self-regulating body that prohibited titles with the words “Horror” and “Terror,” banned any mention of the occult, and insisted that in comic books, law enforcement must always be treated with respect and crime should never pay. As part of this self-censoring regime, comic publishers purged their lines of most superheroes, leaving western, romance, and humor comics. A handful of heroes remained, mostly stalwarts like Superman and Batman, but their stories were wan and gimmicky, far from the action-packed tales of the previous decades. Out of this parched environment came the Fantastic Four. Unlike their relatively simple origin in the comic—a brilliant scientist, his best friend, his girlfriend and her kid brother go into space and are bombarded by cosmic rays—the creation of the Fantastic Four title is shrouded in mystery, controversy, and litigation. One version says Marvel’s publisher, inspired by the success of rival DC’s newly launched team book, the Justice League of America, demanded his own version.
Another says Stan Lee, frustrated by years of toil churning out uninspiring comics, was prompted by his wife to try something new that would excite him. Another version assigns all the creative credit to Jack Kirby, a brilliant artist and storyteller who shunned the spotlight as much as Lee craved it. Most industry observers agree both Lee and Kirby made important contributions, but precisely who did what remains unknown. But for the next 101 issues, the two would work together, with Kirby largely coming up with plots and drawing the stories, while Lee added his distinctive dialogue and feverishly marketed the title. The eventual addition of legendary inker Joe Sinnott completed the package. For all that was revolutionary about the Fantastic Four, there is little about the characters’ powers that is original. Mr. Fantastic’s stretching ability mimicked Plastic Man, the Human Torch was a retread of a 1940s character with the same name, the powers of the Invisible Girl (as she was first known) date at least to H.G. Wells, and the Thing resembles any number of monsters. And collectively, as a team of uniformed adventurers with cool sci-fi gizmos, they looked a lot like the Challengers of the Unknown, a team created by Kirby for DC in 1957. Instead, the inventiveness came from the characters and their interactions. In the first issue, the Thing, (understandably) dismayed at becoming a monster, lashes out at the others. By issue three, the teenaged Human Torch quits the team in a huff. In issue eight, it’s the Thing who quits. There’s also humor, pop-culture references, and lots of action. For young comic readers, this was a radical departure from what they were reading elsewhere. “The DC characters embraced authority, they were do-gooders, like the police who would come to your school and give a lecture,” says Jim Salicrup, who edited the title in the 1980s. 
“There was a certain primal quality to Marvel characters.” What makes the Fantastic Four unique among super teams is their family dynamic. While the members of other teams come and go, the Fantastic Four are, for better or worse, stuck with each other. “They all are really closely tied together, by the original events that conspired to make them into the Fantastic Four. And they all went through it and they all got handed different cards in the deck,” says Walter Simonson, who wrote and drew the comic in the early 1990s. “They're not people or characters from different origins and different places that get together and say, ‘Hey, let's fight crime.’” According to Hickman, who wrote the Fantastic Four from 2009 to 2012, early drafts of the First Steps script missed that critical element. “One of the notes I gave the studio was, ‘This is excellent. It's very cool. I love this story, but here's the problem: It's about a superhero team and not a family.’” (He says subsequent drafts fixed it.) After the initial success of the Fantastic Four comic, Lee quickly began adding new superheroes to the Marvel lineup, often working with Kirby, and busily cross-pollinating the titles. A year after the Fantastic Four debuted, they appeared on the cover of Amazing Spider-Man No. 1. The Hulk appeared in Fantastic Four No. 12. The Avengers brought five heroes together. The comics all contained letters pages, where fans debated the finer points of plots and characters, while Lee’s monthly columns relentlessly promoted the lineup. A fan club soon followed. Readers ate it up. “It was like joining a benevolent cult,” Salicrup says. By the end of the 1960s, the Marvel style of storytelling had spread to DC, whose heroes began to wrestle with real-world issues like racism and drug addiction. And Lee and Kirby continued to crank out their stories, introducing characters as varied and memorable as the Black Panther, Dr. Doom, Nick Fury, and Thor.
That sustained decade of creativity is unmatched in comics, and was the result of the alchemy between Lee and Kirby, says Hickman. “There are people who believe that you should swing for the fence every time,” Hickman says. “That ideas are not a non-renewable resource, that it's a self perpetuating machine, that the more that you add to it, the more you get out of it. And I think people like that are prone to be able to do massive sprawling works of art. Those guys just happen to be those kinds of creators at the origin of what is a North American superhero industry. And we are so fortunate that we had those guys at the helm of the ship.” Source of the article

Ancient site stirs heated political debate on India's past

Excavations at Keeladi village in India's southern Tamil Nadu state have unearthed archaeological finds that have sparked a political and historical battle. Amid coconut groves, a series of 15ft (4.5m) deep trenches reveal ancient artefacts buried in layers of soil - fragments of terracotta pots, and traces of long-lost brick structures. Experts from the Tamil Nadu State Department of Archaeology estimate the artefacts to be 2,000 to 2,500 years old, with the oldest dating back to around 580 BCE. They say these findings challenge and reshape existing narratives about early civilisation in the Indian subcontinent. With politicians, historians, and epigraphists weighing in, Keeladi has moved beyond archaeology, becoming a symbol of state pride and identity amid competing historical narratives. Yet history enthusiasts say it remains one of modern India's most compelling and accessible discoveries - offering a rare opportunity to deepen our understanding of a shared past. Keeladi, a village 12km (7 miles) from Madurai on the banks of the Vaigai river, was one of 100 sites shortlisted for excavation by Archaeological Survey of India (ASI) archaeologist Amarnath Ramakrishnan in 2013. He selected a 100-acre site there because of its proximity to ancient Madurai and the earlier discovery of red-and-black pottery ware by a schoolteacher in 1975. Since 2014, 10 excavation rounds at Keeladi have uncovered over 15,000 artefacts - burial urns, coins, beads, terracotta pipes and more - from just four of the 100 marked acres. Many are now displayed in a nearby museum. Ajay Kumar, leading the state archaeology team at Keeladi, says the key finds are elaborate brick structures and water systems - evidence of a 2,500-year-old urban settlement. "This was a literate, urban society where people had separate spaces for habitation, burial practices and industrial work," Mr Kumar says, noting it's the first large, well-defined ancient urban settlement found in southern India.
Since the Indus Valley Civilisation's discovery in the early 1900s, most efforts to trace civilisation's origins in the subcontinent have focused on northern and central India. So, the Keeladi finds have sparked excitement across Tamil Nadu and beyond. William Daniel, a teacher from neighbouring Kerala, said the discoveries made him feel proud about his heritage. "It gives people from the south [of India] something to feel proud about, that our civilisation is just as ancient and important as the one in the north [of India]," he says. The politics surrounding Keeladi reflects a deep-rooted north-south divide - underscoring how understanding the present requires grappling with the past. India's first major civilisation - the Indus Valley - emerged in the north and central regions between 3300 and 1300 BCE. After its decline, a second urban phase, the Vedic period, rose in the Gangetic plains, lasting until the 6th Century BCE. This phase saw major cities, powerful kingdoms and the rise of Vedic culture - a foundation for Hinduism. As a result, urbanisation in ancient India is often viewed as a northern phenomenon, with a dominant narrative that the northern Aryans "civilised" the Dravidian south. This is especially evident in the mainstream understanding of the spread of literacy. It is believed that the Ashokan Brahmi script - found on Mauryan king Ashoka's rock edicts in northern and central India, dating back to the 3rd Century BCE - is the predecessor of most scripts in South and Southeast Asia. Epigraphists like Iravatham Mahadevan and Y Subbarayalu have long held the view that the Tamil Brahmi script - the Tamil language spoken in Tamil Nadu and written in the Brahmi script - was an offshoot of the Ashokan Brahmi script. But now, archaeologists from the Tamil Nadu state department say that the excavations at Keeladi are challenging this narrative. 
"We have found graffiti in the Tamil Brahmi script dating back to the 6th Century BCE, which shows that it is older than the Ashokan Brahmi script. We believe that both scripts developed independently and, perhaps, emerged from the Indus Valley script," Mr Kumar says. Epigraphist S Rajavelu, former professor of marine archaeology at the Tamil University, agrees with Mr Kumar and says other excavation sites in the state too have unearthed graffiti in the Tamil Brahmi script dating back to the 5th and 4th Century BCE. But some experts say that more research and evidence are needed to conclusively prove the antiquity of the Tamil Brahmi script. Another claim by the state department of archaeology that has ruffled feathers is that the graffiti found on artefacts in Keeladi is similar to that found in the Indus Valley sites. "People from the Indus Valley may have migrated to the south, leading to a period of urbanisation taking place in Keeladi at the same time it was taking place in the Gangetic plains," Mr Kumar says, adding that further excavations are needed to fully grasp the settlement's scale. But Ajit Kumar, a professor of archaeology at Nalanda University in Bihar, says that this wouldn't have been possible. "Considering the rudimentary state of travel back then, people from the Indus Valley would not have been able to migrate to the south in such large numbers to set up civilisation," he says. He believes the finds in Keeladi can be likened to a small "settlement". While archaeologists debate the findings, politicians are already drawing links between Keeladi and the Indus Valley - some even claim the two existed at the same time or that the Indus Valley was part of an early southern Indian, or Dravidian, civilisation. The controversy over ASI archaeologist Mr Ramakrishnan's transfer - who led the Keeladi excavations - has intensified the site's political tensions. In 2017, after two excavation rounds, the ASI transferred Mr Ramakrishnan, citing protocol. 
The Tamil Nadu government accused the federal agency of deliberately hindering the digs to undermine Tamil pride. The ASI's request in 2023 for Mr Ramakrishnan to revise his Keeladi report - citing a lack of scientific rigour - has intensified the controversy. He refused, insisting his findings followed standard archaeological methods. In June, Tamil Nadu Chief Minister MK Stalin called the federal government's refusal to publish Mr Ramakrishnan's report an "onslaught on Tamil culture and pride". State minister Thangam Thennarasu accused the Bharatiya Janata Party (BJP)-led federal government of deliberately suppressing information to erase Tamilian history. India's Culture Minister Gajendra Singh Shekhawat has now clarified that Mr Ramakrishnan's report has not been rejected by the ASI but is "under review," with expert feedback yet to be finalised. Back at the Keeladi museum, children explore exhibits during a school visit while construction continues outside to create an open-air museum at the excavation site. Journalist Sowmiya Ashok, author of an upcoming book on Keeladi, recalls the thrill of her first visit. "Uncovering history is a journey to better understand our shared past. Through small clues - like carnelian beads from the northwest or Roman copper coins - Keeladi reveals that our ancestors were far more connected than we realise," she says. "The divisions we see today are shaped more by the present than by history." Source of the article

GOATReads:Sociology

Albania’s Waste Collectors and the Fight for Dignity

An anthropologist shines a light on Romani and Egyptian recyclers whose work has been made illegal, calling for a new way of viewing humanity’s garbage. One afternoon on my way home from work in Tirana, Albania, I came across a Romani boy dragging his feet and complaining to his family walking in front of him. At first sight, it could have seemed like a normal occurrence—a teenager being forced into a family activity. Except this family was headed to pick through the trash.   “Nuk dua të mbledh kanoçe!” the boy said. (I don’t want to pick up cans.) The interaction made me turn my head. The family’s home at the end of a barely noticeable alley looked like a make-do assemblage of brick walls and roofing sheets. But the boy would clearly rather be there. It felt unjust to see a teenager pressured into labor and a family forced into illegal work. Yet this work provides a valuable service in a country that lacks adequate recycling infrastructure.   In Albania, picking up cans is the daily reality for many Romani and Egyptian people. These communities are among the only minorities of color in a country whose borders were closed for more than 40 years, only opening to outsiders in the 1990s. Facing constant discrimination, Romani and Egyptians have few employment opportunities outside of picking through the trash for recyclables.  But in 2011, Albania passed a law deeming all garbage thrown into collection bins to be the property of each local municipality. This made it illegal for Romani and Egyptians to sort through the trash.  The plight of Romani and Egyptians in Albania provides a window into the lives of the estimated 20 million to 56 million waste pickers around the world. These informal workers are responsible for more than half of plastic recycling globally, helping to curb greenhouse gases and reduce plastic pollution in oceans. Yet in most countries, these essential workers operate in a legal gray area, and in some places their labor is illegal. 
The experiences of these recyclers in Albania’s capital also shed light on the complicated network of people who process humanity’s 2 billion-plus tons of garbage each year, from municipal workers to the mafia to the multitudes living on the margins. Their stories raise several vital questions: Who does garbage belong to? Who is responsible for recycling? How can waste pickers be legally integrated into the circular economy? And what happens when people turn their backs on their trash—and the people who process it?

THE ROLE OF ROMANI AND EGYPTIAN RECYCLERS

Like many places, Albania lacks a proper recycling system. The vast majority of residents do not sort recyclable materials from general household waste. Seeing how difficult trash picking is and how important it is to the environment, I decided to separate my recyclables in dedicated plastic bags. Anytime I encounter a trash picker while throwing out my garbage, I ask them if they’re looking to recycle and hand them my recycling bag. Every time, I get a sincere thank you back, sometimes from a person waist-deep in trash inside a bin. As time has passed, I’ve gotten to know Bujari (a pseudonym), the Romani man who most regularly collects plastic and metals in the bins near my house. I frequently see him standing with his family, waiting for people to finish throwing away their trash. I once asked Bujari if he ever thought of getting other work. He nodded his head while looking at the ground. When I inquired further, he said every time he tried to get a job, he was mistreated, so it’s better for him to work on his own. Autonomy, in whatever forms it takes, is very important for Romani communities and has helped keep their culture alive. The communist regime that ruled Albania from 1946 to 1991 did not achieve much toward integrating Romani communities.
In fact, according to scholars and many Romani people, the communist dictatorship undertook considerable unsuccessful efforts to erase Romani culture under the guise of integration. Refusing to participate in the dictatorship’s sociocultural life, Romani and Egyptians were relegated to the impoverished margins of cities and villages, similarly to politically persecuted people of the time. Out of sight, they were ignored and demonized as thieves, witches, and generally unclean. Unfortunately, nowadays very little has been done at the institutional level to create safe spaces for these communities and to place due value on their ways of life. As gentrification increases, Romani people are forcibly relocated in an attempt to render the city “sterile.” Because so few jobs are available for Romani and Egyptians in Albania, people often work in trash disposal and street sweeping. The latter is regulated, and street sweepers are paid a minimum wage salary. But waste pickers are much more vulnerable. That’s partly because they depend on others’ trash. It’s also in part because handpicking undifferentiated garbage has reinforced stereotypes about their bodies spreading disease. And it’s largely because their work has been made illegal.

THE MESS CREATED BY A GARBAGE LAW

For many years, Albania has engaged in nationwide reforms as part of its application for European Union candidate country status. As a result, Albanian entrepreneurs assumed the country would start recycling, especially in the face of catastrophic climate change. To prepare for the anticipated shift, businesspeople built private plants for recycling plastic and metals. In addition, as a prospective member of the EU, Albania must adopt a legal framework similar to that of EU countries and enforce EU-style laws. But there were concerns that Albanian recycling plants were importing trash from the Italian mafia. Though the government at the time dismissed these claims, the concerns were legitimate.
Italy is just a ferry ride away, across the Adriatic Sea, from Albania. And organized crime does have its hands in garbage management around the world. After all, the industry is legal, lucrative, and easy to get into. It’s also often overlooked. People don’t want to think about their trash, which makes garbage processing prone to corruption and human rights issues.  Due to outside pressure to prevent the mafia from sending trash to Albania, the country passed a 2011 law that regulated garbage processing and recycling, and deemed trash the property of each municipality. The government eventually decided not to implement a countrywide recycling program. Instead, it invested in three incinerators that would burn landfill waste. But the plants were never built, and the only thing burned up was Albanians’ money.  The incinerator initiative turned out to be a corrupt affair that purportedly funnels hundreds of millions of euros of Albanian taxpayer money into sketchy companies with Panamanian accounts. The then-minister of the environment was put in prison. The then-minister of finance accused the prime minister of having ties to the mafia and is now thought to be residing in Switzerland.  As a result of the government’s mismanagement and corruption, today only 10 to 18 percent of Albania’s municipal waste is collected for recycling. Most of that work is done by Romani and Egyptian workers. These informal recyclers contribute significantly to the country’s recycling system by collecting and selling materials to the owners of recycling plants. Though the primary objective of the 2011 law was not to render the working conditions of waste pickers unsafe, it managed to do precisely that. Instances of police physically abusing trash pickers have circulated in the media. In some cases, the public has denounced this form of police brutality and raised issues of human rights violations.  At present, Romani and Egyptian trash collectors are not jailed. 
But they are at the mercy of the police, who often confiscate their daily collections and/or their means of transportation, despite the court deeming this to be indirect discrimination on the basis of financial difficulty and ethnicity.

ESSENTIAL WORK IN AN ILLEGAL SYSTEM

One day I was talking with Bujari about his work in this precarious system. He told me the best times to collect trash are early Saturday and Sunday mornings, after people have been partying. And the best place to collect is the nightlife hotspot area Bllok. But he stared at the asphalt and indicated with his expression that picking trash in such a central area is an unattainable ideal, as police forces keep trash pickers away from the tourist gaze. Romani and Egyptian waste pickers must stay away from the prying eyes of the police. They mainly collect in neighborhoods with low traffic, in gentrifying areas where garbage overflows due to high-density building complexes with few spaces available for trash cans, in the outskirts of Tirana, and at landfills. Amid this system, new forms of social solidarity and social norms have emerged. Bujari emphasized that it is important for him to make connections with people in the neighborhood where he collects, in case they might help him if the police show up. Residents have started to differentiate recyclables at home and dispose of them in separate bags. Wherever trash pickers place huge white sacks for plastic near trash cans, people have made it a habit to throw recyclables in them. Though Tirana’s government doesn’t provide recycling trucks, the Romani community effectively does. I told Bujari I have seen the white truck operated by Romani people that collects the huge sacks of plastic bottles and where I have noticed it. Bujari politely switched the topic to avoid talking about an activity that exists in a legal gray area. Despite setbacks and persecution, the amount of recycling these reclaimers participate in is astonishing.
According to Bujari and other trash pickers, they typically earn about 700 to 800 Albanian Lekë (US$7.59 to US$8.68) per day for collecting more than 1,000 cans and plastic bottles. Clearly, considerable change in all institutions in Albania is long overdue. But waste mismanagement and unfair treatment of informal recyclers is a global problem that requires rethinking the entire system.

REIMAGINING WASTE AND RECYCLING

In the 1960s, renowned anthropologist Mary Douglas tackled the issue of dirt in her book Purity and Danger: An Analysis of Concepts of Pollution and Taboo. Dirt is “matter out of place,” she wrote. When describing the Hindu caste system, she commented, “The whole system represents a body in which by the division of labor the head does the thinking and praying, and the most despised parts carry away waste matter.” Similarly, Albanian policies consider Romani and Egyptians as people out of place. And systems that make waste picking illegal treat these essential workers as among the most despised. Social anthropologist Patrick O’Hare provides a beautifully detailed alternative view in his book Rubbish Belongs to the Poor: Hygienic Enclosure and the Waste Commons. O’Hare profiles waste pickers at an enormous landfill in Uruguay that may imminently be enclosed. The book prompts the questions: Who owns trash, and when does trash change ownership? O’Hare frames garbage as a commons—resources accessible to and managed by the collective for the common good. By viewing garbage as a commons, societies can reconsider our collective sense of responsibility toward waste and the vulnerable communities who process and recycle trash. Forming new shared approaches to trash disposal and recycling could inform equitable and sustainable waste management policies worldwide. Granting municipalities ownership over garbage may be an important step in avoiding mafia infiltration in Albania.
But garbage disposal could be regulated alongside legal reforms that reframe garbage as a commons. In addition, the work of Romani and Egyptian recyclers could be legalized so they can enjoy safe conditions while providing a much-needed service that reduces the country’s ecological footprint.

Societies around the world must rethink trash as belonging to all of us, especially to people who are marginalized and impoverished. I believe the way to do this is by rendering visible the honest labor of waste pickers like Bujari and employing an egalitarian view that safeguards these vulnerable people and their livelihoods. Source of the article

GOATReads: Politics

The Long Undoing of Iran–Israel Relations

Behind the hostility between the two former allies lies an old, improbable intimacy.

Centuries before the state of Israel appeared on a map, before Zionism or kibbutzim or the Balfour Declaration, the Jews had been exiled in Babylon. The year was 539 BCE when Cyrus the Great, founder of the Achaemenid Empire, conquered Babylon and issued a decree allowing the Jews to return to Jerusalem and rebuild their temple. This act, recorded in the Hebrew Bible in the Book of Ezra, was so transformative that Cyrus is the only non-Jew ever referred to as “Messiah” (anointed one) in the Jewish canon. Jewish texts praise him as a liberator chosen by God. Jewish communities across the Persian Empire flourished under Persian protection, and Persian kings—Cyrus, Darius, and Artaxerxes—are remembered favourably in Jewish tradition for their tolerance and patronage. This set the tone for centuries of relatively harmonious Jewish life under Persian rule, from the Achaemenids to the Sassanids.

The Book of Esther, set in the Persian capital of Susa, unfolds under the reign of Ahasuerus (widely identified with Xerxes I), where a Jewish woman becomes queen and thwarts a genocidal plot against her people. That story gave rise to Purim, a festival of survival and salvation, still celebrated today. Its backdrop is unmistakably Persian, and its villains and heroes are woven into the shared lore of both peoples.

Later, under Sassanian rule (224 to 651 CE), the Jewish academies of Babylonia—Sura and Pumbedita—thrived, compiling what became the Babylonian Talmud, one of the cornerstones of rabbinic Judaism. Persian kings granted the Jewish community internal autonomy, allowing a high degree of self-governance under the Exilarchs, the political heads of Babylonian Jewry who traced their lineage to King David.
This historical intimacy of protection and patronage stands in stark contrast to the Roman destruction of the Second Temple in 70 CE or the later inquisitions and expulsions Jews faced in Europe. In the Persian world, Jews were often considered “People of the Book”, protected by imperial law, engaged in commerce, medicine, and administration. Even in the modern era, under the Qajar and early Pahlavi dynasties, Iran’s Jewish community retained a measure of legal recognition and cultural integration, despite periodic pressures. What this long arc suggests is that the post-1979 enmity between Iran and Israel is a radical rupture, not the culmination of an ancient antagonism. It reveals just how thoroughly modern ideologies can override historical memory. The Islamic Republic’s anti-Zionism deliberately broke with this older Persian–Jewish familiarity. It cast Israel not as a regional peer or co-heir to monotheism, but as a colonial implant and a theological affront. In Jerusalem, a street near the city centre bears Cyrus’s name—Rehov Koresh—reminding passersby that this Persian king, not born of Israel, once delivered its people home. And though Israel’s embassies in Tehran are long gone, when they stood, they carried an unspoken echo of that ancient gratitude. But the twentieth century told a different story. After the birth of Israel in 1948, the Arab world snapped its teeth. Boycotts, embargoes, and belligerence followed. Yet Iran, then ruled by Mohammad Reza Shah Pahlavi—the peacock monarch in Savile Row suits and tinted aviators—never joined the chorus. In 1950, just two years after the founding of Israel, Iran became the second Muslim-majority nation after Turkey to recognise the Jewish state, albeit with minimal fanfare. The Iranian embassy in Tel Aviv operated behind a diplomatic smokescreen, officially an “Iranian Interests Section” attached to the Swiss embassy, but in reality, a fully functioning mission. 
Thus began a decades-long courtship: cautious, covert, but remarkably close. For thirty years, the two countries collaborated in a symbiotic embrace, each offering what the other lacked. Israel needed oil, access, and intelligence corridors. Iran wanted arms, training, and a hedge against the pan-Arab nationalism that threatened to engulf the region. The relationship operated at multiple levels. Israeli engineers helped construct radar installations near the Caspian Sea and trained Iranian pilots and paratroopers. Mossad and SAVAK—the Shah’s feared internal security force—ran joint operations in Iraqi Kurdistan, supporting Kurdish rebels as a means to destabilise Saddam Hussein’s Iraq. The Israeli company Elta helped design Iran’s surveillance architecture. A secret oil pipeline—the Eilat–Ashkelon line—allowed Iranian crude to flow to Europe via Israel, bypassing the Suez Canal. Iran bought a 50 percent stake in the pipeline. Diplomatic visits occurred in the shadows. Golda Meir met the Shah privately in 1972, a meeting so secret it was kept from several cabinet members. Yitzhak Rabin was once spirited into the Niavaran Palace past bewildered stewards, his name absent from any official ledger. Israeli agricultural experts travelled to Iran under commercial visas. Military officers wore civilian clothes. There were conferences on irrigation and desert farming in Tehran where Hebrew was spoken behind closed doors. The bond was not only strategic but deeply personal for some leaders. The Shah admired Israel’s military efficiency and nation-building zeal. Prime Minister David Ben-Gurion, for his part, believed that the periphery strategy—aligning with non-Arab powers like Iran, Turkey, and Ethiopia—was the key to Israel’s survival. This clandestine intimacy continued right up to the eve of the Iranian Revolution, when the tapestries of diplomacy were hastily ripped down. 
But for a time, Israel and Iran were not enemies, nor merely distant allies—they were, in Ben-Gurion’s words, partners. In 1961, Ben-Gurion paid a clandestine visit to Tehran. Meeting Iranian Prime Minister Ali Amini, he described the hush-hush nature of their friendship, reportedly saying: “Our relations are like a true love between people without their getting married. It’s preferable that way.” In other words, the two states were “in love” diplomatically, but kept it unofficial to avoid public scrutiny.

Later, in 1972, Golda Meir expressed a desire to visit Iran. According to the diaries of Asadollah Alam, the Shah’s close minister of court, the Shah agreed to meet Meir on the condition the encounter be kept utterly secret. Meir did make a covert trip to Tehran on 18 May 1972, and the Shah, impressed by the then 74-year-old prime minister, later marvelled to his courtier, “That old woman has such stamina”. Meir herself joked afterward with her defence minister about “my affair with the Shah”. The two countries would soon fall out of love.

In 1979, a new face of Iran stepped out of exile and into history. Ayatollah Ruhollah Khomeini flew back from Paris, black-robed and remote, a theologian with the gait of vengeance. The Shah fled. The revolution opened its jaws. Within days, Israel was out: its embassy shuttered, its diplomats expelled, the building handed over to the Palestine Liberation Organization. Where a Star of David once hung, a new banner unfurled—Palestine as Iran’s moral axis, Zionism as sin. Khomeini did not bother with euphemisms. He called Israel a “cancer”, “an imposter”, “a knife in the heart of Islam”. It was not just politics. It was theological inversion. The Islamic Republic’s new identity demanded old affections be recast as betrayal. If Cyrus had been the anointed, modern Israel was apostasy. Iran needed enemies. The United States became the Great Satan. Israel, the Little.
And yet, because history is never quite done with irony, the lovers spoke again. The 1980s brought war. Saddam Hussein’s Iraq invaded Iran. The West, spooked by Khomeini’s rhetoric, embargoed arms. Iran bled. Its jets needed parts. Its soldiers needed bullets. And Israel, enemy of the revolution, obliged. Through clandestine deals and whispered intermediaries, Israel sold Iran over 500 million dollars’ worth of arms. In 1987, Israeli Defence Minister Yitzhak Rabin said the quiet part aloud: “Iran is our best friend. We do not intend to change our position. The regime will not last.” This was the Iran–Contra scandal, that morally kinked triangle where the United States, Israel and Iran traded guns for hostages for guerrillas. Everyone lied. Everyone denied. But there it was again: a marriage, this time of convenience, conducted in the long shadow of ideology.

It did not survive. By the late 1980s, Iran had learned to fight on its own. Its revolutionary identity solidified like cooling iron. It found new proxies—Hezbollah in Lebanon, Islamic Jihad in Palestine—and poured money into them. “Death to Israel” became not just a slogan but a rhythm, chanted on Quds Day, broadcast between sermons. Israel, in turn, reoriented. The periphery doctrine that had once embraced Tehran now looked toward the Gulf. It deepened ties with Egypt and Jordan, flirted with the United Arab Emirates, and in 2020, married into the Abraham Accords. The reverse periphery had arrived. Iran, once friend, became nemesis. And Turkey, once the original ally, drifted. Under President İsmet İnönü, it had recognised Israel in 1949. Under President Recep Tayyip Erdogan, it alternated between rhetorical fury and pragmatic warmth. The three old partners—Israel, Iran, and Turkey—now circled each other like bitter cousins at a broken family meal.

Today, Tehran and Tel Aviv speak only through drones and assassinations. Iranian nuclear scientists die in car bombs. Israeli embassies go on high alert.
Hezbollah stockpiles missiles; Mossad responds with silence and sabotage. There is no love left. And yet, amid this asymmetrical hostility, there remains the quiet irony of the nuclear ledger. Israel, never a signatory to the Treaty on the Non-Proliferation of Nuclear Weapons (NPT), is widely understood to possess an undeclared arsenal of nuclear warheads—estimates range from 80 to 200—housed in a doctrine of deliberate opacity. It neither confirms nor denies their existence, a posture known as “nuclear ambiguity”. Iran, by contrast, is a signatory to the NPT and has long insisted its nuclear programme is civilian in nature. And yet, in the eyes of Israel and much of the West, it is Iran that remains the greater threat—its enrichment levels scrutinised, its facilities surveilled, its scientists assassinated.

In recent years, the covert cold war between Israel and Iran has turned overt. In 2020, Mohsen Fakhrizadeh, the father of Iran’s nuclear programme, was assassinated outside Tehran in an operation widely attributed to Israel. In 2022, Israeli agents reportedly kidnapped and interrogated Iranian operatives inside Iran. In April 2024, Iran launched a direct drone-and-missile strike on Israeli territory, claiming retaliation for the killing of a senior Islamic Revolutionary Guard Corps commander in Damascus. Though most projectiles were intercepted, it marked the first open Iranian attack on Israeli soil. The rules of proxy had begun to fray. And then, on 13 June 2025, Israel struck Iran directly in Operation Rising Lion—nearly 200 fighter jets hitting nuclear sites, missile bases, and the homes of top generals. Tehran’s response was swift: over 100 drones launched toward Israel.

It is easy to forget that the enmity between Israel and Iran is a modern invention. Its vocabulary was forged in the furnace of ideology and hardened by decades of war. Two nations, once joined by myth and mutual need, now face each other as the deadliest of adversaries.
Source of the article

Inside the bizarre race to secure Earth’s nuclear tombs

With nuclear energy production increasing globally, the problem of what to do with the waste demands a solution. But where do you store something that stays dangerous for thousands of years?

Uniformed guards with holstered guns stand at the entrance and watch you lumber past. Ahead lies a wasteland of barren metal gantries, dormant chimney stacks and abandoned equipment. You trudge towards the ruins of a large, derelict red-brick building. Your white hazmat suit and heavy steel-toe-capped boots make it difficult to walk. Your hands are encased in a double layer of gloves, your face protected by a particulate-filtering breathing mask. Not an inch of flesh is left exposed. Peering into the building’s gloomy interior, the beam from your head torch picks out machinery and vats turned orange with rust. On a wall nearby, a yellow warning sign featuring a black circle flanked by three black blades reminds you of the danger lurking inside. Apart from the sound of your own breathing behind your mask, the only thing you can hear is the crackling popcorn of your Geiger counter.

This is what entering the Prydniprovsky Chemical Plant is like for nuclear researchers, including Tom Scott, professor of materials at the University of Bristol and head of the UK Government’s Nuclear Threat Reduction Network. Prydniprovsky was once a large Soviet materials and chemicals processing site on the outskirts of Kamianske in central Ukraine. Between 1948 and 1991, it processed uranium and thorium ore into concentrate, generating tens of millions of tonnes of low-level radioactive waste. When the Soviet Union dissolved, Prydniprovsky was abandoned and fell into disrepair. “The buildings are impressively awful and not for the faint-hearted,” says Scott. “As well as physical hazards, such as gaping holes in the floor, there’s no light or power. And obviously there are radiological hazards.
Until very recently, the Ukrainian Government didn’t have a clue what had gone on at the site, so there were concerns about the high radiation levels and ground contamination.” When radiation levels are deemed too high for humans, Scott sends in the robots. At Prydniprovsky, it was a robotic dog, nicknamed ‘Spot’, that had been developed by Boston Dynamics. Spot, customised with radiation sensors, was wearing rubber socks – the sort you use to prevent your verruca spreading when you go swimming, but on this occasion, worn to prevent any radioactive material getting stuck on its feet. Once activated, Spot trotted off into the building to explore further, using a light detection and ranging (LIDAR) system to create a 3D image of the environment and pinpoint any radioactivity in the area. Scott and his team are known as industrial nuclear archaeologists, and they’re working to find, characterise and quantify the ‘legacy’ radioactive waste at sites around the world. “High-level radioactive waste gives off a significant amount of radioactivity, sufficient to make humans sick if they get too close,” he says. “Some of this waste will be dangerously radioactive for very long periods of time, meaning that it needs to be physically kept away from people and the environment to ensure that no harm is caused.” But finding legacy waste like this, which has been amassing since the 1940s, is only part of the challenge. Once it’s been found, it has to be isolated and stored long enough for it to no longer pose a threat. And that’s not easy. “Currently we’re storing our high-level wastes above ground in secure, shielded facilities,” Scott says. “Such facilities need to be replaced every so often because buildings and concrete structures can’t last indefinitely.” Safely storing the nuclear waste that already exists is only the start of the problem, however. 
With the world moving away from fossil fuels towards low-carbon alternatives, nuclear energy production is set to increase, which means more waste is going to be produced – a lot more. Currently, nuclear energy provides roughly nine per cent of global electricity from about 440 power reactors. By 2125, however, the UK alone is predicted to have 4.77 million m3 (168 million ft3) of packaged radioactive waste. That’s enough to fill 1,900 Olympic swimming pools. Hence, the world needs more safe storage sites for both legacy and new nuclear waste. And it needs them fast.

Safe spaces

In the UK, most nuclear waste is currently sent to Sellafield, a sprawling site in Cumbria, in the north-west of England, with about 11,000 employees, its own road and railway network, a special laundry service for contaminated clothes and a dedicated, armed police force (the Civil Nuclear Constabulary). Sellafield processes and stores more radioactive waste than anywhere in the world. But more hazardous material is on the way, much of which will come from the new nuclear power station being built at Hinkley Point in Somerset. To keep pace, experts have been hunting for other, much stranger, disposal solutions. It’s a challenge for nuclear agencies all around the world. All sorts of proposals have been put forward, including some bizarre ideas like firing nuclear waste into space. (The potential risk of a launch failure showering the planet with nuclear debris has silenced that proposal’s supporters.) So far, the most plausible solution is putting the waste in special containers and storing them 200–1,000m (660–3,280ft) underground in geological disposal facilities (GDFs). Eventually, these GDFs would be closed and sealed shut to avoid any human intrusion. These ‘nuclear tombs’ are the safest, most secure option for the long-term and minimise the burden on future generations.
“In the UK, around 90 per cent of the volume of our legacy waste can be disposed of at surface facilities, but there’s about 10 per cent that we don’t currently have a disposal facility for. The solution is internationally accepted as being GDFs,” says Dr Robert Winsley, design authority lead at the UK’s Nuclear Waste Services. “We estimate that about 90 per cent of the radioactive material in our inventory will decay in the first 1,000 years or so. But a portion of that inventory will remain hazardous for much longer – tens of thousands, even hundreds of thousands of years.

“GDFs use engineered barriers to work alongside the natural barrier of stable rock. This multi-barrier approach isolates and contains waste, ensuring no radioactivity ever comes back to the surface in levels that could do harm.”

But how do you keep that radioactivity in the ground? Radioactive waste is typically classified as either low-, intermediate- or high-level waste. Before being disposed of deep underground, high-level waste is converted into glass (a process known as vitrification) and then packed in metal containers made of copper or carbon steel. Intermediate-level waste is typically packaged in stainless-steel or concrete containers, which are then placed in stable rock and surrounded by clay, cement or crushed rock. The process isn’t set in stone yet, though. Other materials, such as titanium- and nickel-based alloys, are being considered for the containers due to their resistance to corrosion. Meanwhile, scientists in Canada have developed ultra-thin copper cladding that would allow them to produce containers that take up less space, while providing the same level of protection.

Rock solid

The hunt is also on to find facilities with bedrock that can withstand events such as wars and natural disasters (‘short-term challenges’, geologically speaking). Sites that won’t change dramatically over the millennia needed for nuclear waste to no longer pose a risk.
“A misconception is that we’re looking for an environment that doesn’t change, but the reality is the planet does change, very slowly,” says Stuart Haszeldine, professor of carbon capture and storage at the University of Edinburgh. “Our generation must find a way to bury the waste very deep to avoid radioactive pollution or exposure to people and animals up to one million years into the future.”

To achieve this, the site ideally needs to be below sea level. If it’s above sea level, rainwater seeping down through fractures in the rock around the site might become radioactive and eventually find its way to the sea. When this radioactive freshwater meets the denser saltwater, it’ll float upwards, posing a risk to anything in the water above. Another challenge is predicting future glaciations, which happen roughly once every 100,000 years. During such a period, the sort of glaciers that cut the valleys in today’s landscape could form again, gouging new troughs in the bedrock that might breach an underground disposal facility.

“Accurate and reliable future predictions depend on how well you understand the past,” says Haszeldine. “Typically, repository safety assessments cover a one-million-year timeframe, and regulations require a GDF site to cause fewer than one human death in a million for the next million years. Exploration doesn’t search for a single best site to retain radioactive waste, but one that’s good enough to fulfil these regulations.”

Hiding places

In 2002, the US approved the construction of a nuclear tomb in an extinct supervolcano in Yucca Mountain, Nevada, about 160km (100 miles) north-west of Las Vegas. Research estimated the chances of a future volcanic eruption disrupting the proposed repository were one in 63 million per year. So, it wasn’t the potential of a radioactive volcanic eruption that prevented the construction of the site going ahead.
Instead, opponents cited concerns that it was too close to a fault line and, in 2011, US Congress ended funding for the project. Since then, waste from all US nuclear power plants has been building up in steel and concrete casks on the surface at 93 sites across the country. Other sites have fared better, however. Already this year, construction has begun on a nuclear tomb in Sweden, expected to be ready in the 2030s, but it’s also the year the world’s first tomb – at a site in Finland, called Onkalo (Finnish for ‘cave’ or ‘hollow’) – could open its doors for waste. “While there’s a lot of fractured rock at Onkalo, geologists carefully surveyed the area to work out the water flow,” says Haszeldine. “With little landscape topography, there’s no drive pushing water deep underground and so layers of water haven’t moved for hundreds of thousands of years.” In January 2025, the UK Government announced plans to permanently dispose of its 140 tonnes of radioactive plutonium, currently stored at Sellafield. In a statement, energy minister Michael Shanks cited plans to put it “beyond reach”, deep underground. Three potential sites in England and Wales are being explored by Nuclear Waste Services, and one of Haszeldine’s PhD students is independently investigating a fourth off the Cumbrian coast. The offshore site appears to be hydro-geologically stable (even over glacial timescales), but it would be expensive and difficult to engineer. “Currently, about 75 per cent of the UK’s nuclear waste is already stored across 20 sites,” says Winsley. “People are surprised to hear you’re never far away from the most hazardous radioactive waste, wherever you are in the UK. Our mission is to make this radioactive waste permanently safe, sooner.” Although the construction of excavated tunnels for nuclear tombs is expensive, the volume of waste needing to be buried is actually quite small. 
As such, a new ‘deep isolation’ approach is also being considered, which adapts the directional-drilling technology used to reach oil and gas reserves. Essentially, it involves drilling horizontal boreholes into a layer of claystone rock, which can absorb some radioactive leakage and self-seal if fractures form. Disposal canisters containing spent fuel rods from nuclear reactors would go into these boreholes. It’s potentially a simpler solution and doesn’t require anyone to excavate an entire network of large tunnels and chambers through different layers of rock deep underground. The deep isolation approach costs less than a third of what it costs to construct a nuclear tomb and uses smaller sites, but the canisters are harder to recover if anything goes wrong. Nevertheless, it’s a viable option for smaller nuclear countries and a second prototype is expected to undergo field testing at a deep borehole demonstration site in the UK in early 2025.

Locked in

When you think of radioactive waste, you probably imagine glowing rods or oil drums filled with green ooze and covered in warning symbols. In fact, plutonium oxide (a byproduct of nuclear reactors) is stored as a powder that changes colour depending on the chemical composition. But researchers are investigating ways to change its chemical and physical form to make it ready for long-term disposal. At the University of Sheffield, Dr Lewis Blackburn and his team are developing special ceramic materials in which to trap plutonium. Replacing atoms in the tightly ordered structure of ceramic with atoms of plutonium ‘locks in’ the radioactive particles. Think of it like a chain-link fence made of strong, tightly woven metal wires: the researchers are trying to swap out some of those wires with the dangerous radioactive particles to trap them inside the still-strong structure.
The scientists are trying to engineer synthetic versions of ancient natural minerals to use as the ‘wires’ in these ceramic prisons – minerals like zirconolite and pyrochlore, left over from Earth’s formation. For billions of years, these minerals have been exposed to the environment, subjected to natural weathering and exposed to water, microbial activity and temperature changes – so the researchers know they’re made of strong stuff. To test whether their synthetic versions are equally durable and resistant to corrosion, the scientists fire high-energy ion beams at them for hours (to simulate radiation damage) while simultaneously exposing them to low-strength acid. “These tests build a picture of how we think these materials will behave over a very long timescale,” says Blackburn. “The half-life of plutonium 239 is about 24,100 years, but the requirement is to keep a ceramic in that state for up to a million years. Essentially, we’re trying to design materials that’ll last forever. I don’t think humans will be around in a million years’ time, so the work we do needs to outlast humanity.”

Hide and seek

But even after you’ve found a suitable site and buried the radioactive material safely inside it, you still need to warn future generations about what’s hidden inside. The trouble is, even if humans are still around in a million years’ time, there’s no guarantee the languages our descendants speak, or the symbols they use, will be anything like those of today. In Japan, 1,000-year-old ‘tsunami stones’, which warned future generations to find high ground after earthquakes, have failed to prevent construction on vulnerable sites. Even the radiation symbol we use today (that black circle flanked by black blades on a yellow background) isn’t universally recognised. Research by the International Atomic Energy Agency found that only six per cent of the global population know what it signifies.
That’s why scientists have been working with everyone from artists to anthropologists, librarians to linguists, and sculptors to science-fiction writers to come up with other ways of warning future generations about nuclear tombs. Before the plans for the site at Yucca Mountain were abandoned, suggestions included libraries, time capsules and physical markers, including spikes in the ground. At Onkalo, as well as spikes, experts have suggested a huge slab of black granite that would be heated to impassably hot temperatures by the Sun.

More outlandish ideas have included linguist Thomas Sebeok’s proposal of an ‘atomic priesthood’ that would pass on nuclear folklore (in much the same way that generations of clergy have been relaying the tenets of their respective faiths for thousands of years). But why rely on people? The idea of so-called ‘ray cats’ has also been put forward – that is, genetically engineered creatures that would somehow change colour (or glow if bioluminescence could be harnessed) when exposed to radiation. Perhaps not as fear-inducing as, say, a fire-breathing dragon – but if a glowing cat crossed your descendants’ path, it would probably make them think twice about progressing any further.

“Some experts think the safest thing we can do is forget about the existence of the repositories altogether and not leave any markers that might entice intrigued ‘treasure hunters’,” says postdoctoral researcher Thomas Keating, from Linköping University, in Sweden. “So far, every attempt to warn people against entering a crypt has failed. Ancient Egyptian tombs are one example of where messages of danger have been wilfully or accidentally ignored by subsequent generations.
Communicating the memory of nuclear repositories is a unique problem – no one has pulled off anything like this before.” While some back this active forgetting of future nuclear tombs, researchers like Scott are still trying to get everyone to remember the nuclear sites we’ve already forgotten. It’s like a game of nuclear ‘hide and seek’ – but the stakes are high, and there’s no room for error. Thinking back to his time at the Prydniprovsky Chemical Plant in Kamianske, Scott remembers the hunt for radioactive waste coming to an end. The robotic dog Spot returned from its foray in the darkness, and its rubber socks needed to be peeled off – carefully – and disposed of safely. Like the world’s increasing stockpiles of nuclear waste, they needed a home, fast. Currently, nuclear tombs are our best bet, but it’s a burden humanity must shoulder for thousands of years, long after the benefits gained from nuclear technology will have faded. “My personal opinion is, I don’t think we should allow future generations to forget about a geological disposal facility,” says Scott. “The material is both dangerous and, in longer timescales, potentially valuable. People need to be reminded of its presence.” Source of the article

GOATReads: Psychology

Here’s Why You Should Pause Before Replying to That Email

Quick replies are often driven by an unconscious need to feel safe, seen, or in control.

In a world where everything is instant, it’s hard to pause before doing anything. When you wait too long, a real sense of urgency can take over. Your thoughts can run rampant when a message is particularly triggering. Often, however, the instinct to react—quickly, defensively, sometimes harshly—comes from an unconscious need to feel safe, seen, or in control. Replies driven by those instincts are often regrettable.

Pausing is hard, especially in a culture that values outcomes above all else. In a world where speed is mistaken for competence and certainty for strength, stillness can feel intolerable. It can feel like you’re failing if you’re not constantly fixing, moving, and making decisions. The irony is that much of the damage in leadership is done in those impulsive moments. The fix is to turn inward.

Questions to ask yourself

I often get asked, “What are the steps?” “What is the formula?” “How do I do this?!” The truth is, pausing is less about applying an external technique and more about cultivating internal spaciousness. One way to do that is to listen to what is happening in the moment. I always ask my clients three questions that can apply to almost every situation:

What am I not saying that needs to be said?
What am I saying that’s not being heard?
What’s being said that I’m not hearing?

That last question is especially vital. When you pause long enough to truly listen—beneath the words, beneath your defenses—you start to hear what’s being communicated beyond the surface. You hear the fear behind the anger and the longing behind the silence. Then, you start to respond, not just react. That’s the fundamental difference between a reply that invites conversation and one that leaves the recipient feeling puzzled, upset, or conflicted. It’s normal to lurch toward action because it soothes your discomfort, not because it’s the wisest move.
What you really want to say

I encourage my clients to slow down and turn inward before clicking send, and to ask not just why they react, but what is being touched in them when they do. When you feel the urge to react, ask:

What is really happening to me right now?
What old fear might be stirring?
Am I serving an unmet need, or is this really how I want to respond?

The truth is, pausing takes practice. It’s not a trick but a discipline of presence. Like any discipline, it gets easier with repetition and support. In coaching, I often invite leaders to notice the physical sensations that arise right before they speak or hit send. That tightness in the chest and that clenched jaw—it’s the body saying: Wait. Even a breath—just one—can interrupt the cycle.

Silence speaks

It’s common to mistake motion for meaning and visibility for effectiveness. But some of the most powerful acts of leadership come not from doing, but from pausing. The old Buddhist bumper sticker gets it right: “Don’t just do something. Sit there.”

It’s in the pause that you connect with your deeper knowing—the part of you that can discern signal from noise and fear from clarity. When you resist the compulsion to react, you create conditions for something truer to emerge. Not from ego, but from presence. Not from fear, but from integrity.

At first, pausing might seem like self-betrayal. Over time, however, you will learn that pausing isn’t a weakness. It is power grounded in awareness, and in that space, leadership matures. Source of the article