

Why Your Company Needs a Chief Data, Analytics, and AI Officer

How is AI and data leadership at large organizations being transformed by the accelerating pace of AI adoption? Do these leaders’ mandates need to change? And should overseeing AI and data be viewed as a business or a technology role? Boards, business leaders, and technology leaders are asking these questions with increasing urgency as they’re being asked to transform almost all business processes and practices with AI. Unfortunately, they’re not easy questions to answer. In a survey that we published earlier this year, 89% of respondents said that AI is likely to be the most transformative technology in a generation. But experience shows that companies are still struggling to create value with it. Figuring out how to lead in this new era is essential.

We have had a front-row seat over the past three decades to how data, analytics, and now AI can transform businesses. Between us, we have served as Chief Data and Analytics Officer with AI responsibility at two Fortune 150 companies, authored groundbreaking books on competing with analytics and AI in business, and advised Fortune 1000 companies on data, analytics, and AI leadership. We regularly counsel leading organizations on how they must structure their executive leadership to achieve the maximum possible business benefit from these tools. So, based on our collective first-hand experience, our research and survey data, and our advisory roles with these organizations, we can state with confidence that it almost always makes the most sense to have a single leader responsible for data, analytics, and AI.

While many organizations currently have several C-level tech executives, we believe that a proliferation of roles is unnecessary and ultimately unproductive. Our view is that a combined role—what we call the CDAIO (Chief Data, Analytics, and AI Officer)—will best prepare organizations as they plan for AI going forward. Here is how the CDAIO role will succeed.
CDAIOs Must Be Evangelists and Realists

Before the 2008–09 financial crisis, data and analytics were widely seen as back-office functions, often relegated to the sidelines of corporate decision making. The crisis was a wake-up call to the absolute need for reliable data, the lack of which was seen by many as a precipitating factor in the financial crisis. In its wake, data and analytics became a C-suite function. Initially formed as a defensive function focused on risk and compliance, the Chief Data Officer (CDO) role has evolved in the years since its establishment, as a growing number of firms repositioned these roles as Chief Data and Analytics Officers (CDAOs). Organizations that expanded the CDAO mandate saw an opportunity to move beyond traditional risk and compliance safeguards and focus on offense-related activities, using data and analytics as a tool for business growth.

Once again, the role appears to be undergoing rapid change, according to forthcoming data from an annual survey that one of us (Bean) has conducted since 2012. With the rapid proliferation of AI, 53% of companies report having appointed a Chief AI Officer (or equivalent), believe that one is needed, or are expanding the CDO/CDAO mandate to include AI. AI is also driving greater focus on, and investment in, data, according to 93% of respondents.

These periods of evolution can be confusing to both CDAIOs and their broader organizations. Responsibilities, reporting relationships, priorities, and demands can change rapidly—as can the skills needed to do the job right. In this particular case, the massive surge in interest in AI has driven organizations to invest heavily in piloting various AI concepts. (Perhaps too frequently.) These AI initiatives have grown rapidly—and often without coordination—and leaders have been asked to orchestrate AI strategy, training data, governance, and execution across the enterprise.
To address the challenges of this particular era, we believe that companies should think of the CDAIO as both evangelist and realist—a visionary storyteller who inspires the organization, a disciplined operator who focuses on projects that create value for the company while terminating those that do not deliver a return, and a strategist who deeply understands the AI technology landscape. At the core of these efforts—and essential to success in a CDAIO position—is ensuring that investments in AI and data deliver measurable business value. This has been a common stumbling block for this kind of role. The failure of data initiatives to create commensurate business value has likely contributed to the short tenures of data leaders (to say nothing of doubts about the future of the role entirely). CDAIOs need to focus on business value from day one.

To Make Sure AI and Data Investments Pay Off, CDAIOs Need a Clear Mandate

In most mid-to-large enterprises, data and AI touch revenue, cost, product differentiation, and risk. If trends continue, the coming decade will see the systematic embedding of AI into products, processes, and customer interactions. The role of the CDAIO is to act as orchestrator of enterprise value while managing emerging risks. A single leader with a clear business mandate and close relationships with key stakeholders is essential to lead this transformation. Based on successful AI transformations we’ve observed, organizations today must entrust their CDAIOs with a mandate that includes the following:

· Owning the AI strategy. To bring about any AI-enabled transformation, a single organizational leader must define the company’s “AI thesis”—how AI creates value—along with the corresponding roadmap and ROI hypothesis. The strategy needs to be sold to and endorsed by the senior executive team and the board.

· Preparing for a new class of risks. AI introduces safety, privacy, IP, and regulatory risks that require unified governance beyond traditional policies. CDAIOs should normally partner with Chief Compliance or Legal Officers to manage this mandate.

· Developing the AI technology stack for the company. Fragmentation and inconsistent management of tools and technology can add expense and reduce the likelihood of successful use-case development. CDAIOs need the power to follow through on their vision for the adoption and development of tools and technologies that are right for the organization, providing secure “AI platforms as products” that teams can use with minimal friction.

· Ensuring the company’s data is ready for AI. This is particularly critical for generative AI, which primarily uses unstructured data such as text and images, whereas most companies have focused mainly on structured numerical data in the recent past. Data quality approaches for unstructured data are both critical to success with generative AI and quite different from those for structured data.

· Creating an AI-ready culture. Companies with the best AI tech might not be the long-term winners; the race will be won by those with a culture of AI adoption and effective use that maximizes value creation. CDAIOs should in most cases partner with CHROs to accomplish this objective.

· Developing internal talent and external partner ecosystems. It’s essential to develop a strong talent pipeline by recruiting externally as well as upskilling internal talent. It also requires building strategic alliances with technology partners and academic institutions to accelerate innovation and implementation.

· Generating significant ROI for the company. At the end of the day, CDAIOs need to drive measurable business outcomes—such as revenue growth, operational efficiency, and innovation velocity—by prioritizing AI initiatives tied to clear financial and strategic KPIs. They serve as the bridge between experimentation and enterprise-scale value creation.
Positioning CDAIOs for Organizational Success

As important as what CDAIOs are empowered to do is how they’re positioned within the organization to do it. Companies are adopting different models for where the CDAIO reports. While some CDAIOs report into the IT organization, others report directly to the CEO or to business area leaders.

At its core, the primary role of the CDAIO is to drive business value through data, analytics, and AI, owning responsibility for business outcomes such as revenue lift and cost reduction. While AI technology enablement is a key part of the role, it is only one component of the CDAIO’s broader mandate of value creation. Given the emphasis on business value creation, we believe that in most cases CDAIOs should be positioned closer to business functions than to technology operations. Early evidence suggests that only a small fraction of organizations report positive P&L impact from gen AI, a fact that underscores the need for business-first AI leadership. While we have seen successful examples of CDAIOs reporting into a technology function, this works only when the leader of that function (typically a “supertech” Chief Information Officer) is focused on technology-enabled business transformation.

Today, we are witnessing a sustained trend of AI and data leadership roles reporting into business leaders. According to forthcoming survey data from this year, 42% of leading organizations report that their AI and data leadership reports to business or transformation leadership, with 33% reporting to the company’s president or Chief Operating Officer. Data, analytics, and AI are no longer back-office functions. Leading organizations like JPMorgan have made the CDAIO function part of the company’s 14-member operating committee. We see this as a direction for other organizations to follow. Whatever the reporting relationship for CDAIOs, their bosses often don’t fully understand this relatively new role and what to expect of it.
To ensure the success of the CDAIO role, executives to whom a CDAIO reports should maintain a checklist of the organization’s AI ambitions and the CDAIO mandate. Key questions include:

· Do I have a single accountable leader for AI value, technology, data, risk, and talent?

· Are AI and data roadmaps funded sufficiently against business outcomes?

· Are our AI risk and ethics guardrails strong enough to move ahead quickly?

· Are we measuring AI KPIs quarterly at minimum and pivoting as needed?

· Are we creating measurable and sustainable value and competitive advantage with AI?

The Future of AI and Data Leadership Is Here

Surveys about the early CDO role reveal a consistent challenge—expectations were often unclear, and ROI was hard to demonstrate with a mission focused solely on foundational data investments. Data and AI are complementary resources. AI provides a powerful channel to show the value of data investments, but success with AI requires strong data foundations—structured data for analytical AI, and unstructured data for generative AI. Attaching data programs to AI initiatives allows demonstration of value for both, and structurally this favors a CDAIO role. The data charter (governance, platform, quality, architecture, privacy) becomes a data-and-platforms component within the CDAIO’s remit. The benefits include fewer hand-offs, faster decision cycles, and clearer accountability.

To turn AI from experiment to enterprise muscle, organizations must establish a CDAIO role with business, cultural, and technology transformation mandates. We believe strongly that the CDAIO will not be a transitional role. CEOs and other senior executives must ensure that CDAIOs are positioned for success, with resources and an organizational design that support the business, cultural, and technology mandate of the CDAIO. Strong AI and data leadership will be essential if firms expect to compete successfully in an AI future that is arriving sooner than anyone anticipated.

Medical Students Are Learning Anatomy From Digital Cadavers. Can Technology Ever Replace Real Human Bodies?

From interactive diagrams to A.I. assistants, virtual tools are beginning to supplant physical dissections in some classrooms

A human chest as large as a room fills your entire field of view. With a few commands, you shrink it down until it’s a mere speck. Then, you return it to life-size and lay it prone, where you proceed to strip off the skin and layers of viscera and muscle. Helpful text hovers in the air, projected across your field of vision by a headset, explaining what you see.

This futuristic experience is becoming more commonplace in medical schools across the country, as instructors adopt virtual reality tools and other digital technologies to teach human anatomy. On dissection tables in some classrooms, where students might have once gathered around human cadavers, digitized reconstructions of the human body appear on screens, allowing students to parse the layers of bones and tendons, watch muscles contract, and navigate to specific anatomical features. Sandra Brown, a professor of occupational therapy at Jacksonville University in Florida, teaches her introductory anatomy class with exclusively digital cadavers. “In a way, the dissection is brought to life,” she says. “It’s a very visual way for [students] to learn. And they love it.”

The dissection of real human cadavers has long been a cornerstone of medical education. Dissection reveals not only the form of the organs but also how the structures of the human body work together as a whole system. The best way to understand the human body, researchers have argued, is to get up close and personal with one. But human dissection has also been controversial for hundreds of years, with a history burdened by grave robbers and unscrupulous physicians. Now, with interactive diagrams, artificial intelligence assistants, and virtual reality experiences, new technology might provide an effective alternative for students—no bodies necessary.
Still, the shift toward these tools raises questions about what might be lost when real bodies leave the classroom—and whether dissecting a human body carries lessons that no digital substitute can teach. “Is it helpful to be exposed to death, and is there something beyond just the functional learning of dissecting a cadaver?” says Ezra Feder, a second-year medical student at the Icahn School of Medicine at Mount Sinai in New York. “I don’t really have a great answer for that.”

“A new dimension of interaction”

Among the most popular new additions to anatomy classrooms are digital cadaver “tables.” These giant, iPad-like screens can be wheeled into the classroom or the lab. Anatomage, a California-based company that produces one such table, has seen its product adopted by more than 4,000 health care and education institutions. The company uses real human cadavers that have been frozen and imaged in thousands of thin sheets, then reconstructs them digitally, so students can repeatedly practice differentiating layers and systems of the body. Digital cadavers are not new, but they’re getting better, more realistic, and more interactive. They’re so good that some schools have phased out real human cadavers entirely.

Brown, who uses the Anatomage table, says digital dissection suits the learning styles her students prefer. “They’ve had smartphones in their hands since they were born, practically. So, the fact that we have this massive virtual technology that they can use, and they can actually start to incorporate all the skills they have into learning—it was just a no-brainer for me,” she says. “It’s really fun.” Brown’s students can rotate, move, and manipulate the digital cadavers in ways that would be impossible with a real body. “They literally have the brain upside down, and they’re looking at it from underneath. You can’t really do a lot of that when you have a cadaver in front of you, because they’re so fragile,” she says.
“It’s an errorless way for [students] to explore, because if they make a mistake, or they can’t find something, they can reset it, and they can undo it.”

Other companies, like Surglasses, which developed the Asclepius AI Table, are taking the digital cadaver model one step further. This table features A.I. assistants with human avatars that can listen and respond to voice commands from students and educators. The assistants can pull up relevant images on the table and quiz students on what they’ve learned. Recent research has shown that A.I. assistants can effectively support student learning and that those with avatars are particularly promising.

“Students really respond well to technology that’s accessible to them,” says Saeed Juggan, a graduate teaching assistant at Yale Medical School, which has its own suite of digital anatomy tools, including a 3D model of a body that students can access from their own devices. Still, Juggan is a bit wary of A.I. tools because of potential limitations with the data they’re trained on. “Suppose students ask a question that’s not answered by those resources. What do you do in that case? And what do you tell the bot to do in that case?” he says.

With virtual and augmented reality (VR/AR) anatomy programs, human dissection has become even more futuristic. Companies like Toltech have created VR headsets that transport students into an immersive digital cadaver lab, where they manipulate a detailed, annotated body standing in a gray void. While learning remotely during the Covid-19 pandemic, students at Case Western Reserve University donned bug-like visors to interact with holographic bodies that appeared to be floating in the students’ apartments. Still, VR comes with complications. Some students experience motion sickness from the headsets, explains Kristen Ramirez, a research instructor and content director for the anatomy lab at New York University’s Grossman School of Medicine.
Her approach, and that of her team at NYU, is to tailor the technology to fit the type and content of instruction. Ramirez and a colleague have created an in-house VR program that allows students to stand inside a human heart. Students can see “everything that the red blood cells would have seen if they had eyes and cognition,” she says. For certain parts of the body, an immersive experience is the best way to understand them, Ramirez adds. The pterygopalatine fossa, for example, is a small space deep inside the face, roughly between the cheek and nose—and the only way to see it, until now, has been by sawing through a donor’s skull. Even then, the fragile structures are inevitably damaged. With VR, students can view that cavity as though they are standing inside it—the “Grand Central Station of the head and neck,” as Ramirez calls it—and access “literally a new dimension of interaction.”

The body on the table

Even as digital tools land in more medical school classrooms, some say that learning from an actual body is irreplaceable. William Stewart, an associate professor of surgery at Yale University who teaches gross anatomy, sees the embodied experience of dissection as vital. “There’s a view of learning called ‘gestalt,’ which is that learning is the sum of all of the senses as the experience occurs,” he explains. “There’s seeing, there’s touching, there’s camaraderie around the table. There’s—I know this sounds silly, but it’s true—there’s the smell,” Stewart says. “All of those contribute, in one way or other, to the knowledge, and the more and more of those senses you take away, in my view, the less and less you learn.”

When it comes to preparing for surgery or getting tactile experience with a body, surveys suggest students generally favor cadaver dissection, with some citing better retention of concepts. Working solely with digitized, color-coded models that can respond to voice commands does students a disservice, Stewart argues.
“It’s not seeing it, it’s finding it that makes the knowledge.”

The donation of one’s body to scientific learning and research is not taken lightly. It’s common practice for medical students to participate in a memorial service to honor the people they will dissect as part of their studies. Jai Khurana, a first-year medical student in the Harvard-MIT Health Sciences and Technology Program currently taking introductory anatomy, describes a respect and care for the human body that he and his fellow students learn through human dissection. “We regularly stay many hours past our anatomy lab if we don’t think we’re going to finish,” he says. “You still want to finish what you’re doing, do it in a respectful way and learn everything that you can learn.”

Still, ethical violations have long plagued human dissection. In the 18th and 19th centuries, medical students dissected corpses stolen from graves, and even the bodies of people who had been murdered and sold by unscrupulous profiteers. At the time, dissection was used in Britain as an extra punishment for executed criminals, depriving them of a Christian burial; the boon to medical research was a bonus. Today, some countries around the world and many U.S. states still permit the dissection of “unclaimed remains,” or the bodies of those who die without family to properly bury them, raising concerns about consent. A recent investigation revealed that unclaimed corpses sent to the University of Southern California’s anatomy program were sold to the U.S. Navy, where they were used to train military operatives. And although most medical school cadavers in the U.S. are willingly donated, donors and families are sometimes underinformed about what may happen to the remains. No federal agency monitors what happens to bodies donated for research and education.
In an extreme case from this year, the former manager of the Harvard morgue pleaded guilty to stealing donated human remains and selling them to retailers for profit.

Digital cadavers, VR/AR, and A.I.-enhanced anatomy technology could offer a way to skirt these issues by reducing the number of human bodies needed for education—and the cutting-edge tech might actually be less costly than human cadavers for some medical programs. Whichever way you swing it, bodies are expensive, even when they are donated. Supporting a cadaver lab requires administrative staff to coordinate body donations, a wet lab space equipped for dissections, and infrastructure for disposing of human remains. Because of this, students typically work in small groups to use fewer bodies. Each human cadaver can be used only once for each procedure, but digital ones can be reset repeatedly.

For Brown, who teaches with the Anatomage table, the ideal lab would mix synthetic, digital, and real human dissection, supplementing a largely digital cadaver-based education with different tools to demonstrate various elements of anatomy. But given the financial constraints at Jacksonville University, Brown does what she can with the Anatomage table, having her students rotate the body’s shoulders, color-code structures, and create videos of their work to reference later. Her occupational therapy students are not preparing to be surgeons, so they do not need to practice cutting into flesh, she adds.

Learning from a human cadaver has long been considered a rite of passage for medical students, who typically must dissect a body during their first-year anatomy class. But the emotional weight of human dissection can sometimes hinder, not enhance, the experience. “Cadavers can be scary, like a dead body laying in front of you that you have to look at,” says Brown.
“And I think that [digital dissection] is just a safe way for [students] to explore.” She explains that some students enter the program expecting cadaver dissection and feel more comfortable when they encounter the digital models instead.

Perhaps there’s value in sitting with that discomfort, though. Ramirez says that students who were initially apprehensive about human dissection might never overcome their squeamishness when offered a virtual alternative. “Because they are getting such small moments of interaction with the cadavers, I definitely will still see students even a couple weeks in, you know, disappointed, if you will, that they’re at a cadaver station, hesitant about going and interacting with it,” she says.

For Feder, the student at Mount Sinai, dissecting a real human body elicited mixed emotions. In the lab, some students seemed to become desensitized over time to the cadaver’s humanity and treated it inappropriately, he says. “For some people, it became so ordinary and routine that maybe they lost some respect for the body,” Feder adds. “Maybe these are coping mechanisms.” Regarding respect for the dead, he says, “I think I’d feel a lot more comfortable learning from a technology-based cadaver than a real human bone-and-flesh cadaver.” Educationally, the physical cadaver dissection “was really invaluable,” Feder adds, and “a little bit hard to replace.” But he notes that the value of being exposed to death might vary between students, depending on what exactly they’re training for. “Overall, most doctors are in the business of keeping people alive.”

The future of learning from cadavers

While anatomy classes increasingly use digitized bodies, cadaver dissection writ large is not likely to disappear anytime soon. It remains a common way for surgeons to gain tactile experience with manipulating human flesh.
Juggan, the Yale graduate student, explains that a neurosurgeon recently practiced a hemispherectomy on cadavers at the university before operating on a living patient. The procedure, which entails surgically separating different parts of the brain, is difficult, with a potential for catastrophic failure. Practicing this surgery on a cadaver is “not necessarily looking at the tissue,” Juggan says. “It’s getting the muscle memory. It’s getting the tactile feel for … this anatomical structure.” It goes without saying: no living patient wants to be the beta test for brain surgery.

But not all cadavers are used to prep for such dramatic, high-stakes operations. As Mary Roach observes in her book Stiff: The Curious Lives of Human Cadavers, even plastic surgeons practice nose jobs on disembodied heads from donors. “Perhaps there ought to be a box for people to check or not check on their body donor form: Okay to use me for cosmetic purposes,” she writes.

Though the tactile training of surgeons remains important, surgery itself is getting more technologically advanced. With more robot-assisted surgeries, it’s not hard to imagine that technology and anatomy teaching will become more deeply integrated. Take, for instance, laparoscopic surgery, in which a tiny camera is inserted into the abdomen to look inside the pelvis or stomach. The procedure feels almost like science fiction: the surgeon makes minute adjustments from a distance, while the patient’s organs appear on a glowing screen. “If you’re doing laparoscopic surgery, you’re putting three tiny holes in the abdomen, and you’re playing a video game,” Ramirez says.

There’s a conceivable future in which the majority of medical students do not dissect an actual human body. The problem of tactile experience might also soon be solved by innovation, as synthetic cadavers—made of thermoplastic and organosilicate—mimic the physicality of the human body without the limitations of ethics or decomposition.
The anatomy classroom may soon be filled with digital dead people and synthetic approximations of flesh, rather than a decaying memento mori. Death’s banishment is, after all, in service of keeping more people alive longer.

GOATReads: Politics

After “Abortion”: A 1966 Book and the World That It Made

“We were all considered slightly cracked, if not outright fanatics, that first year.” —Larry Lader, Abortion II

“Abortion is the dread secret of our society.”1 So began journalist Larry Lader’s controversial book, Abortion, published in 1966 after years of rejection from publishers. If you had told Lader or the mere handful of activists then dedicated to legalizing abortion that a Supreme Court case would overturn anti-abortion laws across the US seven years later—in a January 1973 case named Roe v. Wade—they probably would have laughed. In fact, in the early 1960s, when Lader began researching, it was harder to get an abortion in the US than it had been in the early decades of the twentieth century. In 1966, American doctors—who were overwhelmingly white men—tightly controlled women’s reproductive options. And women of color, primarily Black and Latina women, had even fewer choices if they found themselves accidentally pregnant. Nearly 80 percent of all illegal abortion fatalities were women of color—primarily Black and Puerto Rican.2 And, worst of all, as Lader documented, deaths from illegal abortions had doubled in the preceding decade.

Before Lader’s book, no one, it seemed, wanted to talk about abortion publicly. But something changed with the 1966 Abortion. For starters, Reader’s Digest—one of the bestselling magazines in the US at the time, with a circulation in the millions—excerpted eight pages. This thrust Lader into the limelight, turning him from a journalist into an abortion activist almost overnight. He began receiving hundreds of letters and phone calls at his home from people asking for contacts for abortion providers who would perform the procedure safely. Lader’s wife, Joan Summers Lader, remembers receiving the calls at all hours and worrying that their phone might be tapped. She advised women to write to their home address with their requests, because, if necessary, letters could be burned.
They worked tirelessly to make sure that no woman was turned away and that, whenever possible, each received a safe and affordable abortion.

Because of Abortion, Lader found himself at the center of a burgeoning radical abortion rights movement. Activists, lawyers, religious leaders, and health practitioners from across the country who supported the repeal of abortion laws reached out to him, applauding his book and asking what they could do next to advance their cause. In 1969, with Dr. Lonny Myers and Reverend Canon Don Shaw in Chicago, Lader organized the first national conference dedicated to overturning abortion laws. He insisted that the conference’s efforts should result in a national organization that could continue centralizing the work of local repeal groups. On the last day, he chaired a meeting with hundreds of people, trying to bridge differences and prevent shouting matches, mediating between those who wanted to forge ahead and those who were more cautious. At the end, NARAL was founded. In those early days, its name stood for the National Association for the Repeal of Abortion Laws, and its ultimate goal was precisely that: to repeal all laws restricting abortion.

Six decades ago, Lader’s book launched a movement. In the days before the internet connected people, Abortion served as a link joining activists, doctors, lawyers, clergy, and others affected by restrictive abortion laws who felt deeply that those laws needed to change. Doctors saw firsthand how anti-abortion laws killed, maimed, and emotionally destroyed women who couldn’t have safe and legal abortions; progressive clergy understood the trauma inflicted by these laws and how they alienated people from religion; and lawyers saw an opportunity to extend the new privacy protections afforded by the 1965 Supreme Court case Griswold v. Connecticut, which finally legalized birth control for married couples.
And of course, women, their partners, and their families understood firsthand how anti-abortion laws curtailed their lives and limited their freedom. Still, bringing people together to start a national movement on a controversial subject that rarely received sympathetic attention from the media or politicians was challenging. Abortion, however, proclaimed loudly that all abortion laws should be repealed, that there was no shame in seeking an abortion, and that without legal abortion women would never be free. It was the spark needed to bring a movement together, and Lader embraced his role as its convener.

For nearly a century, abortion had been outlawed in every American state. Some of the earliest anti-abortion laws were passed as antipoison measures to protect women who were sold toxic chemicals claimed to be abortifacients. By the 1860s, however, it was clear that most anti-abortion laws were intended to control women’s reproduction and keep them in the domestic sphere. Nineteenth-century anti-abortion rhetoric framed abortion as unnatural or as interfering with (white) women’s moral duty to reproduce for the state. Despite these restrictions, however, abortion did not go away.

When, in 1962, Lader decided to write about abortion’s history and present-day consequences, he received little support at first. He was a seasoned magazine writer with two published books. Even so, he was surprised that—even with all his connections, even as a white man—it was nearly impossible to find an editor willing to publish a pro-legal-abortion article in a mainstream magazine.3 In letter after letter, he pitched a long-form article that would present his research on the impact of anti-abortion laws and show the harm they caused. No editor was willing. One wrote back that he had become “a little squeamish on the subject,” and he recommended Lader “find an editor with more guts.”4 Rather than give up on the project, Lader decided to dive deeper, and he wrote a book proposal.
Abortion would be the first book to explore the history of the procedure in the US and make a case for repealing all abortion laws. It was a radical decision, and Lader knew it would open him up to attack. He also worried that it might kill his career as a magazine writer and ensure he was never again offered an assignment. Still, he decided to forge ahead. More than 12 publishers rejected the proposal before he found an editor willing to give him a book contract with the Bobbs-Merrill Company. In 1964—when Lader was knee-deep in abortion research—there were an estimated 1.1 million abortions a year in the US.5 Only 8,000 of those were legal “therapeutic” abortions, performed in hospitals and approved by a panel of doctors, which means that almost all women sought the procedure illegally. (In 1955, therapeutic abortions for psychiatric reasons made up 50 percent of all hospital abortions; by 1965, only 19 percent of hospital abortions were granted for psychiatric reasons.6) Of those 8,000 legal hospital abortions, virtually none were performed on Black and Puerto Rican women.7 As Lader wrote, there was one abortion law for the rich and one for the poor. In other words, only women with financial means could buy their way to a hospital abortion given under safe and legal conditions. In Abortion, he argued that legal abortions should be available outside the hospital setting—in freestanding abortion clinics—because hospitals too often treated patients as “pawns” and cared more about preserving their reputations than about preserving the health of their patients. He also saw how some hospitals still sterilized patients against their will as the price for agreeing to offer an abortion.8 Abortion not only presented a history of abortion in the US but also functioned as a handbook for people looking to connect with an abortion provider. 
Lader was careful not to publish names, but he included a chart of how many skilled abortion providers he could locate in 30 states, including Washington, DC. He also explained the practicalities of traveling to Puerto Rico, Mexico, or Japan for an abortion. In another chapter, titled “The Underworld of Abortion,” he explored what happens when abortion is illegal and unregulated. He noted that the victims of illegal abortions are people who can’t afford to travel and so resort to getting abortions from unqualified people—sometimes doctors who had lost their license for alcoholism or other substance abuse issues. Because these abortions were underground, safety standards weren’t always followed; if unsterilized equipment was used, it increased the chance of infection. Some desperate women resorted to self-induced abortions, and although determining the numbers was difficult, one Kinsey study of Black women and of white and Black incarcerated women estimated that 30 percent of abortions were self-induced. These abortions were especially dangerous, often leading to lethal infections or infertility.9 For all these reasons, he argued for the complete repeal of all abortion laws across the US. He believed abortion should be no more regulated than any other routine medical procedure. When Lader became interested in abortion politics, there was a tepid abortion reform movement in New York City, led in part by the gynecologist/obstetrician Alan Guttmacher.10 Guttmacher participated in the first conference about abortion, organized in 1955 by Planned Parenthood’s medical director, Mary Calderone. However, following Planned Parenthood’s stance on abortion in the 1950s, the conference focused on abortion mainly to highlight the need for better contraception, and most of the participants supported anti-abortion laws.11 When proceedings from the conference were published in 1958, only Guttmacher’s contribution emphasized the need to liberalize those laws. 
However, even Guttmacher only supported abortions performed in hospitals, approved by a board of doctors, and limited to cases that merited it because of the woman’s mental or physical health.12 As head of obstetrics and gynecology at Mount Sinai Hospital in New York City, he created a panel of doctors to approve abortions, and the number of approved abortions grew modestly.13 In 1959, Guttmacher—with the help of his twin brother, Manfred—joined with the influential American Law Institute (ALI) to draft the first abortion reform law, which would allow for abortion in cases where continuing the pregnancy would affect the woman’s physical and mental health or if the pregnancy resulted from rape or incest. The law also mandated that two doctors had to approve the abortion. The law lowered the bar and created clearer guidelines for obtaining an abortion. Still, it maintained that abortions should only be performed in hospitals, and women had to submit their request for an abortion to a panel of doctors for approval. In his unpublished memoirs, Lader recounts considering what position to take on abortion. He knew he stood against the laws that imposed severe restrictions, making it virtually impossible for most American women to obtain a legal abortion. But as he set out to write on the topic, he consulted with his wife about how far to go in his argument for legalization. Was setting a limit after 20 weeks of pregnancy reasonable? Should abortions only be performed in hospitals? What about limiting the reasons under which an abortion is permissible? Should it be allowed under all circumstances or for no stated reason at all? Lader knew that arguing for a complete repeal of all abortion laws was radical, given the barely existent conversation about changing the laws at the time. 
After talking with his wife and remembering how he had helped his ex-girlfriend’s friend obtain an illegal abortion, driving her to Pennsylvania, he decided that the terms of legal abortion should never be circumscribed by law.14 Lader subtitled the last chapter of Abortion “The Final Freedom” because he believed that “The ultimate freedom remains the right of every woman to legalized abortion.” He cited Margaret Sanger, the subject of his first book, who had argued for birth control under similar terms: a woman cannot call herself free until she can control her own body. Sanger never argued for legal abortion because she naively believed that with accessible and effective birth control the need for it would be obviated. Lader understood that the natural extension of Sanger’s argument is not only legal but also affordable abortion with no strings attached. He presciently recognized, decades in advance, that an abortion pill, if one could be invented, would radically transform abortion access. Lader was not a religious man, but he sought the help of clergy—from the Reverend Howard Moody to Rabbi Israel Margolies—who supported the repeal of all abortion laws. Recognizing the power of religious leaders to sway public opinion, he ended his book with the rabbi’s words: “Let us help build a world in which no human being enters life unwanted and unloved.”

The guitarist’s palette

In the hands of a great musician, the gloriously simple guitar can create the most complex works of art. Here’s how. In the hands of a great performer, the classical guitar can mesmerise audiences with its beauty, emotional power and subtlety. The 20th century was dominated by the Spanish guitar legend Andrés Segovia, who took the instrument from the salon to the large concert halls of the world, aided in part by developments in guitar-making that produced louder instruments. A later generation included superb players such as John Williams and Julian Bream (sometimes described as the Apollo and Dionysus of interpretation). Other notable virtuosi, including Ida Presti, David Russell, Pepe Romero, Manuel Barrueco, Roland Dyens, Kazuhito Yamashita, and the brothers Sérgio and Odair Assad, have enchanted listeners around the world with their musicianship. In the 21st century, younger players such as Xuefei Yang, Ana Vidović and Gabriel Bianco are reaching new audiences via YouTube and social media. What’s distinctive about the classical guitar is its simplicity. Ultimately, it’s a wooden box with strings attached, a fretted neck, a bridge, a saddle and tuning pegs. The classical guitar has no inbuilt amplification, and the sounds are produced very directly. While many other instruments are based on that fundamental design, the guitar is simple in that you’re just plucking the strings, as opposed to the relative complexity of bowing a violin, viola or cello with horsehair that’s been rubbed down with rosin. With the classical guitar, you don’t even use a pick. The sound is created by carefully shaped and maintained fingernails on the plucking hand of the performer, which is overwhelmingly the right hand (many left-handed people play the guitar right-handed). 
So there is a glorious simplicity to the guitar, yet that belies a complexity: what’s particularly distinctive about it is that numerous frequently used notes can be played in multiple places on the fretboard. The top string of the guitar, the one that carries the melody most of the time, is tuned to an E above middle C, and that exact same note exists not just as the open first string, but also on the fifth fret of the second string, the ninth fret of the third string, and the 14th fret of the fourth string – four playable places in all (in principle, there is also a fifth place, as the note can be played on the 19th fret of the fifth string too, but that’s unlikely to be used). It’s the same pitch, the same note and the same note name, but the tonal quality, texture or flavour of the sound differs on each string that is sounded. Middle C on the piano is a single key on the keyboard, while middle C on the guitar exists in three comfortably usable places. It’s found on the first fret of the second string, the fifth fret of the third string, and the 10th fret on the fourth string, and again they have contrasting tonal qualities or richness. On the guitar, all those notes that can be played in different places have different timbres and they combine differently with other notes that are also replicated in multiple places, to create distinctively different tonal qualities. That’s our palette – it’s what we use to paint music in particular ways on our instrument. Guitarists talk a lot about creating tonal variety, about finding ways of changing the sonic character of the same pitch. Choosing where on the fretboard to play notes and chords has a huge impact on the mood and character of the music being interpreted. With standard tuning, the bottom five notes on the guitar – E, F, F sharp, G, G sharp – can be played only on the open and first four frets of the sixth string. 
But once you go above the A at the fifth fret on the sixth string (which is the same pitch as the fifth string played open), we have entered the world of identical pitches playable in more than one location on the fretboard. At most, a single pitch is practically usable in four different positions on the guitar. For many notes, there are three comfortably possible locations. But when you build chords around those, chords can be voiced or played in multiple different positions as well. Open strings have a different timbre from strings that are stopped, so that’s a factor too. If you’re sight-reading on the piano, the notes you’re reading can be sounded only by playing one specific key on the keyboard. But the guitar is very different. Each positional choice you make produces sounds with a different character. It’s not infinitely complex, but complex enough. With that comes a fascinating and wide range of expressive possibilities. So, the guitarist faced with, for instance, a crotchet or a quaver, sees a note at a certain pitch, but that on its own doesn’t determine where to play it. Reading, for example, an E that corresponds to the top E string of the guitar (the top space of the treble clef musical stave), the logical assumption might be that you play this as an open first string. But if, as is commonly the case, you have to play other notes that can be played only on that string at the same time, or overlapping with it, it becomes entertainingly complicated. A whole series of decisions both technical and interpretative come into play. There is a great deal of creative puzzle-solving that goes on when learning a new score. The guitarist also has to decide how long each note will last. If left to itself, the note will die away, but sometimes performers will stop a note from resonating at a certain point, or may let it sound on for longer than the official duration that’s written into the score. 
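Returning to the fretboard geometry described above, the mapping from a written pitch to its playable locations can be sketched in a few lines of code. This is an illustrative aside, not part of the essay: the function name, the MIDI-number representation and the 19-fret cut-off are all assumptions made for the sketch.

```python
# Open-string pitches in standard tuning, as written (treble-clef) MIDI
# numbers, from the 1st (high E) string down to the 6th (low E) string.
OPEN_STRINGS = {1: 64, 2: 59, 3: 55, 4: 50, 5: 45, 6: 40}

def positions(midi_note, max_fret=19):
    """Return every (string, fret) pair that sounds the given written pitch."""
    return [(string, midi_note - open_pitch)
            for string, open_pitch in OPEN_STRINGS.items()
            if 0 <= midi_note - open_pitch <= max_fret]

# The top E (E above middle C, MIDI 64): five locations in principle,
# the 19th-fret one on the fifth string rarely used in practice.
print(positions(64))  # [(1, 0), (2, 5), (3, 9), (4, 14), (5, 19)]
```

For middle C (MIDI 60) the same function returns the first fret of the second string, the fifth fret of the third and the 10th fret of the fourth, the three comfortable places the essay mentions, plus a rarely used 15th-fret spot on the fifth string; the essay's count of "comfortably usable" places is a judgement the code does not attempt to make.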
Guitar notation is notoriously vague when it comes to specifying how long certain notes should, or shouldn’t, be sustained. How the notes fade away can be part of the beauty of a performance, as can silences within a piece, or at the very end, after the final note or chord has died away – those few magic seconds before the guitarist looks up to take applause. The guitar can produce sounds that are bright, tinny, thin, rich, sonorous, percussive, soupy, plummy, sweet, seductive, harsh, and so on – classical guitarists speaking to one another can sound like wine experts who know what they mean by ‘chalky’, ‘fruity’ and ‘cheeky’. These qualities are not just the result of the string and fret choice. The way the guitarist plucks the string matters too, the angle of attack of the fingernail and the strength of the finger movements. There are also two very distinct ways of striking the string, known as rest stroke and free stroke, sometimes referred to using the Spanish terms apoyando and tirando. With a free stroke (tirando), you pass your carefully crafted fingernail across the string, following through above the remaining strings towards the palm of your hand, whereas with a rest stroke (apoyando) you push down into the guitar with the fingertip coming to rest on the adjacent string, so that the string vibrates in a different direction – more up and down rather than straight across the body of the instrument. This typically produces a rounder, richer and usually slightly louder sound that is audibly different from tirando. This is especially noticeable in recordings that, in effect, courtesy of close microphone placements, put the ear of the listener closer to the guitar than would normally be the case in a concert hall. The apoyando or rest stroke is usually richer in tone, and is used for emphasis or accent, often to bring out a line of melody. The tirando or free stroke is typically lighter and can be nimbler. 
If you listen to a scale played free stroke, and then the same scale played rest stroke, there is an obvious difference: there is a slightly more percussive, less smooth or legato effect to playing scales with repeated rest strokes, an effect often exploited in flamenco and flamenco-influenced music. When it comes to playing chords, these can be plucked together simultaneously, or arpeggiated (when the notes of a chord are sounded individually, in a rising or descending order); they can be strummed with a thumb or finger, or sounded in more complex rasgueado rhythmic, multi-finger strumming patterns that can also involve slapping the strings with the open hand. Some guitar composers also use pizzicato. For bowed string instruments, pizzicato indicates that the strings are to be plucked rather than bowed – in that sense, the guitar is played pizzicato most of the time. Hence, on the guitar, pizzicato refers to a muffled shortening of notes produced by resting the fleshy side of the palm of the right hand on the strings just inside the bridge to create a staccato note that has a warm popping sound and is used to create beautiful contrast in many pieces, such as in the opening passage of the guitar transcription of ‘La Maja de Goya’ (1911) by the Spanish composer Enrique Granados. Some composers also borrow the golpe from flamenco, a technique that requires the performer to strike the top of the guitar with the hand, thumb or fingers, using it like a drum. Harmonics are another possibility. These are of two kinds. Natural harmonics are produced by the guitarist placing a finger lightly on a string at a node point, eg, at the 12th fret, which sets the string resonating in two halves producing a very beautiful, quiet, distinctive, pure, bell-like note. Natural harmonics can be produced for all strings at the 12th, seventh and fifth frets relatively easily, but they can also work on other frets in the hands of a skilled performer. 
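The physics behind those familiar harmonic frets can be sketched briefly. This aside is not from the essay: touching a string at roughly 1/n of its length selects the nth harmonic, which sounds at n times the open-string frequency, and the 12th, seventh and fifth frets sit near the nodes for n = 2, 3 and 4. The function names and the 8-harmonic limit are assumptions of the sketch.

```python
def fret_fraction(fret):
    # In equal temperament each fret raises the pitch a semitone, so the
    # vibrating length shrinks by a factor of 2**(1/12) per fret; this is
    # the fraction of the full string length from the nut to the fret.
    return 1 - 2 ** (-fret / 12)

def harmonic_at_fret(fret, max_harmonic=8):
    # Which harmonic's node (at 1/n of the string length) lies closest
    # to this fret position.
    frac = fret_fraction(fret)
    return min(range(2, max_harmonic + 1), key=lambda n: abs(frac - 1 / n))

def harmonic_frequency(open_freq_hz, fret):
    # Touching (not pressing) the string above this fret sounds the nth
    # harmonic, at n times the open-string frequency.
    return open_freq_hz * harmonic_at_fret(fret)

# Open A string (110 Hz): the 12th fret gives the octave, the seventh
# the octave plus a fifth, the fifth fret the double octave.
print(harmonic_frequency(110, 12))  # 220
print(harmonic_frequency(110, 7))   # 330
print(harmonic_frequency(110, 5))   # 440
```

The 12th fret sits exactly at the string's midpoint (fret_fraction(12) is 0.5), which is why that harmonic is the easiest to sound cleanly; the seventh and fifth frets only approximate 1/3 and 1/4 of the length, close enough for the lightly touching finger to find the node.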
Then there are artificial harmonics in which the guitarist uses the tip of the first finger on the plucking hand to touch the node while simultaneously plucking the string with the second or third finger or thumb of the same hand. The hand on the fretboard then holds down different notes while the right hand moves to the corresponding nodal point on the string. The combination of natural and artificial harmonics allows guitarists to play melodies entirely in harmonics, often accompanied by gently played bass notes that are not harmonics. The Romantic French composer Hector Berlioz, who also played the guitar, is supposed to have said: ‘The guitar is a small orchestra,’ a quotation that has also been attributed to the earlier German composer Ludwig van Beethoven. In part, this saying alludes to the guitar’s capacity for playing polyphonic music with several distinct lines, but particularly to the variety of tone colour that the guitarist’s palette offers. And so, when considering timbre and tonal variety on the guitar and where you might choose to play certain phrases, this sense that you have an orchestral range of colours at your disposal courtesy of those six strings of different thicknesses and tensions is important. You can imagine colour bursting out of the instrument with each string having its own shades of a particular hue. And then those colours change as the guitarist’s hand moves around the fretboard. Melody is that portion of music that people tend to connect with most directly. Very few audience members will go away singing bass lines or accompaniments. In an orchestral context, our ears naturally focus on the lovely soaring melodies often carried by the first violins and some of the wind instruments, though these aspects of instrumentation are fascinatingly variable. On the guitar, our top E string is the equivalent of the first violins in the orchestra: it very often carries the melody. 
If we think of a popular and well-known tune, such as Stanley Myers’s ‘Cavatina’ (1970) – the theme from the movie The Deer Hunter (1978) – that melody is predominantly on the top string throughout. The melody sings above the accompanying arpeggio with the bass line below. With a simple piece that consists of melody, accompaniment and bass line, the upper strings and the lower strings tend to perform distinct functions. Melody is usually on the top strings, bass on the bottom, and in the middle you might have some arpeggios or rhythmic figures that provide harmonic context and maintain rhythmic flow and which are subdued in volume compared with the melody and bass. The high E string of the guitar, the first string, tends to grab most of an audience’s attention as its key role is frequently melodic. By comparison, the sixth string, tuned to an E two octaves below it, represents the double bass of the orchestra as it carries a lot of the bass lines. In a modern classical guitar, there are also significant tonal differences between the strings, because the top three strings are made of smooth nylon or carbon composite material, while the three lower bass strings are wound with metal. For centuries, the classical guitar’s strings were made from sheep or cow gut but, from the 1940s, the top three strings began to be made from nylon, with the bottom three strings from a multi-filament nylon core wound in silver-plated copper. For a while, string manufacturers would buy the raw material from fishing-line manufacturers, but treble strings made of fluorocarbon polymers are increasingly popular these days. Ever since 1948, when the string manufacturer Albert Augustine discovered that nylon was an effective alternative to gut, this synthetic substance has been the main material used for the top three strings. 
In the 21st century, string-makers started to use a special polymer called polyvinylidene fluoride (PVDF), also known as fluorocarbon, although string manufacturers commonly refer to them simply as ‘carbon’. What makes these ‘carbon’ strings so special is that they have physical qualities that result in a brighter and more projecting tone. Andrés Segovia played on gut strings for the early part of his career from the 1910s, and that had a distinctive sound – rounder, but also somewhat duller than the nylon strings that he was central to developing with Augustine, and that performers such as Julian Bream and John Williams used in the 1960s when the classical guitar became an extremely popular instrument. Segovia’s influence over the development of classical guitar-playing was immense, but he used to finger his music in a distinctive way that is now, arguably, less in fashion. If he could choose between playing notes in higher positions, such as the seventh fret, or in a low position closer to the head of the guitar, he tended to play higher on the fretboard. When played on modern guitar strings, many of the pieces in Segovia’s repertoire sound more lyrical and brighter if played in lower positions and, as a consequence, performing guitarists today often use quite different fingerings from those that Segovia wrote into his printed scores. Sometimes, the distinctive sound of the lower strings is exploited by composers who write melodies using the metal-wound fourth, fifth and sixth strings. The orchestral analogy is a good one because there are many instances in musical history where the cellos in particular (which you could say are loosely represented by the fourth and fifth strings on the guitar) are given rich and prominent melodies to play. 
A particularly good example of this, well known to many guitarists, would be Prelude No 1 (1940) by Heitor Villa-Lobos, which opens with an extended passage of melody that uses the fifth and fourth strings, supported by a bass note on the sixth string, with accompaniment chords played on the top three treble strings. Here, the typical structure of music with which we started is inverted. The sustained melody is now on the richly resonating bass, and the accompaniment is in the treble. A virtuoso guitarist is fully aware of all these things going on simultaneously, and will adjust, as a matter of personal interpretation, the tone colour for a particular passage, perhaps by varying repeated material (‘Never play the same thing the same way twice’ is one of our popular maxims) or just by the use of tonal variety to characterise certain passages of music. It is very common to play something in a low position and then, if you have to play the same passage again in a repeat or echo, to finger it in a higher position, to produce a warmer, richer version of it. One other particularly potent tool available to guitarists is to play close to the bridge on the guitar, which is called sul ponticello, or away from the bridge (sul tasto), sometimes also referred to as dolce (sweet). The tonal contrasts that can be created by plucking near the bridge or away from it are greater on the classical guitar than on any other instrument. With bowed instruments, you can also play near the bridge, which is squeakier and lighter, or produce a richer, fuller timbre away from the bridge but, with the guitar, the effect is more pronounced. Bream was a great exponent of this type of tonal contrast. With some musical instruments, it is quite easy to change dynamics – how loudly or softly the performer plays. The guitar is not the easiest of instruments in this respect, and one of the most pervasive criticisms of student guitarists is a lack of dynamic variety. 
The received opinion about the guitar is that it has a more compact dynamic range than many of the instruments that we listen to in the classical world, which also largely explains its absence in the vast bulk of orchestral repertoire as it cannot effectively compete with the volume produced by most orchestral instruments individually, let alone when they are playing together. The guitar that I usually play, built by the Australian guitar-maker Greg Smallman, was designed to produce more resonance and volume than guitars built earlier in the 20th century. Its design allows a wider dynamic range than many more traditional instruments, and it goes some way to negating the need for amplification, though I do sometimes use amplification, particularly when playing in ensembles with louder instruments such as the piano, saxophone, accordion, or in chamber groups with mixed instrumentation. The challenge on the guitar is to create the impression of a large dynamic range by sometimes playing pianissimo (very, very quietly) – in fact, the guitar can be played incredibly quietly and to great effect – and then also to exploit the full resources of the instrument to achieve the dramatic impact of playing fortissimo (very loud!). It’s just a question of what is musically appropriate, and is also partially determined by the acoustics of the venue. Performers tend to gauge this once we’re in a space, while also feeding off the audience response as we play. The wonderful thing about quieter dynamics on any instrument, but particularly on an instrument as intimate as the guitar, is that you can really draw the listeners in, make them almost literally lean forward to hear. 
But the guitar can be played aggressively too, creating a huge dramatic impact with the impression of substantial volume, sometimes with the aid of percussive effects (hitting or slapping the guitar) or using what’s known as Bartók pizzicato, where the string is pulled away and then released to slap aggressively against the fretboard. When an experienced performer blends these different techniques, the audience may be moved by the beauty, rightness and subtlety of the result without necessarily being aware of the choices the guitarist has made. In the hands of a great musician, this simple instrument embodies expressive possibilities that are almost limitless – we sculpt with sound. I have spent many hours of my life with my guitar and don’t regret a second of that. In practice, toying with the full palette of colours the guitar has to offer brings immense pleasure and individualises interpretations of even the best-known pieces. Having the opportunity to entertain people and strive to make them feel what I feel about the music I play is a great privilege. 

Fracking has transformed an Argentine town, but what about the nation?

Mechanic Fabio Javier Jiménez found himself in the right place at the right time. When his father moved their family-owned tyre repair shop to the rural Argentine town of Añelo, it was a small, sleepy place, some 1,000km (600 miles) southwest of Buenos Aires. There was no mains water or gas, and the electricity supply was constantly being cut off. Then in 2014, fracking for oil and gas started in the surrounding region, and the conurbation boomed. "We set up the tyre repair shop in the middle of the sand dunes, far from the town centre," says Mr Jiménez. "Then the town grew and passed us by." Fuelled by its new-found energy wealth, Añelo's population soared from 10,788 in 2010 to 17,893 in 2022, an increase of more than 60%. In addition, Añelo sees some 15,000 workers enter the town each weekday. This has made the roads very busy, with many oil tankers passing through. Last year, 24,956 vehicles entered the town every day, of which 6,400 were lorries, official figures showed. Mr Jiménez's workshop on the main provincial road is there to help any that need new tyres. Añelo is located in the heart of Vaca Muerta, a 30,000 sq km (12,000 sq mi) oil and gas-rich geological formation. It was first discovered as far back as 1931, but it wasn't until fracking became legal in Argentina in 2014 that the deposits could be commercially accessed. Fracking, or hydraulic fracturing, is a method of extracting oil and gas that first became widespread in the US in the early 2000s, whereby a high-pressure mixture of water, sand and chemicals is injected into the ground. This cracks or fractures the rock, allowing the gas or oil trapped inside to be brought to the surface. The first fracking operation in Vaca Muerta was a joint venture between Argentina's majority state-owned oil firm Yacimientos Petrolíferos Fiscales (YPF) and US giant Chevron. By February of this year, there were 3,358 wells in active production in Vaca Muerta, according to the Argentine Institute of Oil and Gas. 
Of these, 1,632 are oil, and 1,726 are gas. This accounts "for more than half of Argentina's oil and gas production", says Nicolás Gadano, chief economist at the Empiria consultancy and a former YPF official. He adds that the cost of the fracked oil is cheaper than conventional extraction elsewhere in Argentina, because the latter now comprises very old deposits where the remaining oil is hard to get to. Nicolás Gandini, director of Econojournal, a media outlet specialising in energy, agrees. "We have not been able to find new conventional deposits that are very cost-competitive, with the exception of conventional gas deposits in the offshore southern basin," he says. "All other onshore deposits are three to four times more expensive than Vaca Muerta." The oil and gas from Vaca Muerta has given Argentina energy self-sufficiency, overturning decades of shortages and the need for expensive imports. It has also allowed Argentina to export oil and gas, helping it to earn foreign currency. "Last year, there was a significant external surplus in the energy sector of $6bn [£4.6bn]," says Mr Gadano. "This year, we are aiming for a similar figure, with much higher volume but lower prices due to the drop in international prices." Mr Gandini adds that the fact Argentina is now exporting more energy than it imports "is very important" for the country, "especially when two or three years ago we were in the red". Yet he adds that it won't be "the panacea" that cures an Argentine economy that has long battled high inflation, heavy public spending and defaults on its national debt. "I think there is an overrepresentation of the value that Vaca Muerta can bring to solving the structural problems facing the Argentine economy," he says. "However, if one looks at what Argentina has today to generate more dollars, it does not have many sectors other than Vaca Muerta. 
It has agriculture, but agriculture also has its problems: the country has not been able to expand its agricultural production base. Beyond agriculture, mining lags far behind." Other commentators argue that oil and gas extraction from Vaca Muerta is being held back from reaching its full potential because Argentina's bad credit rating is putting off international investors. They also point to strict limits on how many pesos firms can exchange into foreign currencies. This has long been the case to curb the flight of capital out of the country, and to protect the reserves of Argentina's central bank. "Companies say 'everything is fine with Vaca Muerta, but I haven't been able to get a single dollar out of Argentina for 15 years, so we make money but we have to reinvest it there by force'," explains Mr Gadano. "That's not how the world works, that's not how companies work, especially the big international players." The government of President Javier Milei lifted foreign exchange controls for individuals last April, and following his party's victory in mid-term elections last month, it is expected that restrictions on companies may soon also be lifted. Other critics say that Vaca Muerta is being hampered by insufficient pipelines, poor roads and the lack of a railway connection. Gustavo Medele is energy minister of Neuquén Province, where the town of Añelo and much of Vaca Muerta are found. He says that the provincial government "is doing what it has to do and what it can do". What continues to help Vaca Muerta is that it has achieved a political consensus – all the main parties support increased oil and gas extraction. "All the relevant political forces agree that this is an industry that needs to grow," notes Mr Gadano. This consensus has become a problem for those who, since the start of fracking at Vaca Muerta, have voiced their environmental concerns. 
"We are really losing in the public debate," says Fernando Cabrera, director of environmental pressure group Observatorio Petrolero Sur. "There is a very noticeable difference in the capacity for public and media influence; provincial legislatures are largely in favour of exploitation, as are the national chambers, so it is a very uneven dynamic." Back at Mr Jiménez's garage, business is so good that he has opened a second branch. "When we came to Añelo, we were happy to service two vehicles a day. Then we serviced 10 vehicles, and now we have 20 vehicles a day." Yet he is sceptical that oil and gas exploitation will be the solution to all the country's problems. "Yes there will surely be oil and gas for many years to come, but that does not mean that Argentina will not continue to experience economic and political ups and downs."

How to Lead When Things Feel Increasingly Out of Control

A few weeks ago, a senior manager at a global technology company we work with burst into tears mid-meeting. For months, she had been fighting fires and chasing one AI update after another, rewriting roadmaps every week as new tools arrived. That same morning, she had stepped out of a call where the CFO confirmed that a restructuring would almost certainly eliminate many of her team members’ roles. Minutes later, one of her direct reports had asked her, “Am I going to have a job in six months?” By the time she joined our leadership session, the weight of pretending she had answers had become too much, and the emotions spilled out. That moment captures something larger. Strategy once felt like running a marathon on a clear day. Now it feels like sprinting through the fog while the track shifts beneath your feet. Leaders everywhere are confronting the same reality: things increasingly feel out of control. Three distinct forces are colliding to create this pervasive sense of fear and uncertainty. In this article, we’ll discuss those forces, how fear affects how you lead, and how to respond.

Three Engines of Leaders’ Fears

The forces driving today’s fears are familiar, but the rules for managing through them are being rewritten in real time thanks to the high volume and fast pace of change. Consider the effects of: Policy volatility. Leaders must now navigate shifting tariff regimes, abrupt regulatory changes, the risk of public clashes with politicians, evolving H-1B and immigration rules, and sudden trade embargoes. These shocks can arrive with little warning. A policy announced on social media can change hiring plans overnight. New tariffs can disrupt supply chains and product roadmaps. A single post can make or break stock prices within hours. These are no longer rare events; they’re rhythms of disruption: constant, ambient volatility that reshapes decisions about people, operations, and capital every single week. An AI-saturated world.
AI is infiltrating every workflow, every product, every decision. The questions it raises feel existential: What does “work” even mean when machines perform the thinking? Which functions should be redesigned, augmented, or replaced? For many workers, the line between being augmented and being replaced has never felt thinner. Geopolitical fragmentation. The global map is fracturing. The single, integrated system of the past is splintering into rival blocs and regional hubs. Trade barriers and sanctions are rising. Movement of capital, precious hardware, data, and talent faces more restrictions. How should a firm position itself across regions with different rules and risks?

How Fear Distorts Leadership

Fear changes the brain before it changes behavior. Unchecked fear does more than paralyze people; it reprograms priorities. Research in neuroscience suggests that acute stress shifts brain resources toward threat detection, narrowing perception and draining creative capacity. Instead of scanning for opportunities, the mind locks onto threats. In this mode, we’ve seen leaders default to firefighting. They fix the urgent and delay the important. Experiments stall because they feel unsafe. Imagination shrivels up. As a result, managers begin trading long-term potential for short-term gains and protection. We’ve seen three clear patterns emerge: Decision deferral disguised as prudence. Leaders wait “one more quarter” for clarity that never arrives. Hiring and capital expenditures keep getting delayed. Over-indexing on control. Fear breeds micromanagement. Checklists replace principles; compliance replaces curiosity. The conversation shifts from creating value to avoiding loss, and initiative disappears in the process. Narrative drift. When fear takes over, the story unravels. Without a vivid, durable vision, teams do what’s safest for their own units, often at the expense of the wider narrative. Activity increases while direction fades.
The company gets busier, but more aimless and emptier of meaning. Paradoxically, the more uncertainty compounds, the more valuable clarity becomes. Fear feeds chaos, but leadership must feed coherence. The task now is not to eliminate fear, but to convert it into focus. That begins with how leaders design their systems—and their own time.

How to Respond to Fear

In our work with CEOs, boards, and executive teams across several industries, we’ve uncovered five practical steps to address these shifts while preserving imagination, morale, and momentum: 1. Build a policy intelligence system, not a rumor mill. When policy shifts come by social post, panic spreads faster than facts. The antidote is to create a structure that systematically processes new information. This is how leaders turn anxiety into intelligence: Create a cross-functional policy desk. Include legal, government affairs, compliance, supply chain, finance, and HR. Meet weekly. Write a brief. Outline what happened (for example, a new regulation, investigation, or policy proposal), what is probable, and what is confirmed. Identify what, if anything, should change in operations, hiring, pricing, or sourcing. Set response thresholds. Define specific triggers—such as a law banning imports of key materials, a tariff crossing a threshold, or a deadline being set—that justify action. Avoid whiplash from every post or speech. Close the loop. Track which signals turned into real rules and which didn’t. Use that record to inform future responses. At one company we worked with, a global leader in streaming content, leaders created a simple, weekly “signal brief.” A team regularly scanned new regulations, court rulings, and political statements, then summarized what was “noise” vs. what might become a law relevant to their business. Every Monday, the executive team reviewed the brief and selected from just three options: no action, prepare, or act now.
This led to fewer panicked email chains, clearer ownership, and a documented history of which shocks mattered to the business. The structure didn’t remove uncertainty, but it contained panicked reactions to it. The result is a calmer organization that responds to facts rather than fear or rumors. 2. Default to real options, not binary bets. “Binary bets” are all-or-nothing commitments: single, large investments that assume the world will behave exactly as planned. Those are dangerous acts of faith in a volatile environment. Real options, by contrast, are small, staged investments that let you learn from the market before you commit more resources. They limit how much you can lose on any single move while keeping the possibility of bigger wins open. To implement them: Stage commitments. Break big initiatives into milestones with learn-then-spend checkpoints where you assess results before releasing more budget.  Release capital only when signals improve. Tie funding to concrete evidence, such as early customer usage, unit economics, or risk indicators, rather than hope. Run pilots and proofs of concept. Test new products, services, or processes in the market and in operations, not only on slides. Value flexibility. Compare the benefit of waiting with the benefit of moving now. In some cases, fund two small but competing pilots to learn faster. We worked with a product organization building AI-powered features in a competitive marketplace. The founding team had been stuck in a debate about a single, large platform bet. Rather than choosing one winner in advance and putting all resources behind it, they funded three small pilots in different customer segments, each with a clear learning goal and a time-boxed budget. One pilot failed quickly, one evolved into something entirely different, and one showed strong promise and was later scaled up. 
Because leadership treated each experiment as an option instead of a commitment, the team moved faster with less fear of being wrong and greater focus on what could be learned from the dynamic market. This approach converts unknowns into structured bets and keeps the company moving without overexposure and overcommitment. 3. Create an AI operating doctrine. AI is not a single tool, but a menu of capabilities that will rewire processes and products. Leaders need a simple doctrine that guides its adoption while reducing fear in their employees: Clarify where AI augments work today and where it may replace teams. Be direct and humane. For example, be transparent with employees about which roles will change, how decisions will be made, and what support people will receive. Map product risks and opportunities. Where can AI enhance an offer, and where could it disrupt or commoditize it? Appoint AI champions in every function. Give them license to run safe experiments, share lessons, and coordinate standards. Set boundaries and guardrails. Define data controls, model-selection criteria, testing protocols, and human-in-the-loop checkpoints. Create pathways for skills. Offer new learning tracks, workflow redesign, and model governance. At a large, global retail company, the CEO heard growing anxiety about AI replacing middle-office roles. Rather than letting fear grow in the shadows, leadership published a one-page AI doctrine. It spelled out three “red lines” (what AI would not be used for), three priority use cases (where AI would assist workers), and a clear commitment to reskill before any role redesign or overhaul. Business heads nominated AI stewards in each function to run small experiments and share outcomes in a monthly forum. The effect was not to remove all fear, but to replace rumors with a shared, evolving blueprint. Clarity and guardrails make experimentation faster, safer, and more compliant, which reduces fear and accelerates value. 4. 
Protect leadership vision time like a critical asset. Fear steals the scarcest resource in any company: attention. When every hour becomes crisis time, strategy suffers. Leaders must rethink their workdays to allow time for strategic thinking: Schedule fixed blocks of time for strategy. Treat them as immovable. Use them for long-horizon choices, design reviews, and portfolio shaping. Separate operating diagnostics from vision. Don’t let incident reviews consume strategy sessions. Build reflection into calendars. Leaders need space for reading, thinking, and renewal. Tired brains never design bold futures. Model the behavior. If the CEO protects vision time, others will follow. The president of a major book publishing imprint realized that her entire week had devolved into incident calls, stakeholder management, and internal team escalations. She redesigned her calendar so that two mornings a week were blocked for strategy: no status updates, no crisis meetings, no email. Those blocks were used for reviewing the portfolio, debating long-term bets with a small group, meeting external experts, and developing future roadmaps. She also asked her direct reports to create their own “vision blocks” and report monthly on what decisions had emerged from that time. The content of the work didn’t change overnight, but the message was clear: Designing the future isn’t an extracurricular activity, but part of the job itself. The organization takes its cues from the top. Guarding where attention goes is leadership in action. 5. Strengthen geopolitical and supply-chain resilience, together. Geopolitics is not background noise. Treat it as a shifting stage that demands active design: Aim for resilience with thoughtful redundancy. Diversify suppliers, manufacturing sites, data hosting, and talent sources. Segment supply chains. Build regional footprints where appropriate to reduce exposure to cross-border shocks. Run “war games” and stress tests. 
Simulate embargoes, policy shifts, cyber incidents, and export controls. Pre-plan rerouting of shipments and substitution of suppliers, materials, or routes. Align resilience with cost and service. Explain trade-offs clearly. Some redundancy is an insurance premium worth paying. We worked with one global materials manufacturer that convened quarterly geopolitical drills. In each session, a small executive team walked through scenarios such as a sudden export or import control on a critical component, a regional data localization law, or a cyberattack on a logistics partner. For each scenario, they identified a primary response, a backup plan, and the cost of each option. Finance leads sat at the table with the operations and communications teams to make the trade-offs explicit. Then, when a real import restriction eventually hit one of the key materials, they were prepared to pivot. Production still took a hit, but the business avoided a full shutdown because the decisions had, in effect, already been made. Resilience is not inefficiency, but foresight that’s baked into tangible plans.

The CEO’s Role: Courage Over Certainty

The age of fear is real, but paralysis is not destiny. This time calls for leaders who admit what they can’t predict, but who plan anyway. Their job is to maintain focus amid operational chaos and to build resilience without abandoning ambition. Employees don’t expect the CEO to predict every twist. They want honesty about uncertainty and a story that connects daily work to a durable mission. Trust then becomes a vital currency. Employees, investors, and customers look for more than financial results. They look for psychological safety in turbulent times. CEOs who practice empathy, clarity, and transparency build cultures that can ride out waves of disruption. Leaders stand at a fork in the road: One path leads to permanent firefighting, reacting to daily policy shocks and long-term technological and geopolitical change.
The other path leads to building new systems, skills, and mindsets that restore the core of leadership: vision. Great leadership today won’t be measured by the absence of fear, but by the ability to transform it into clarity, courage, and shared purpose amid uncertainty.

Are Plastic Cutting Boards Safe?

Long ago, humans chopped and ground meats and vegetables on natural surfaces like rocks. Eventually, we decided to trade these slab stones for wooden cutting boards. More recently, many home chefs, restaurants, and food producers have switched to plastic boards for convenience, lighter weight, and cost-effectiveness. But recent research points to a potential downside: the cutting action of knives causes plastic boards to release tiny pieces, called microplastics, into the chopped-up food. Whether these fragments of plastic affect health likely depends on many factors that continue to be studied. Here’s what researchers say about plastic boards and whether you should replace them with another material.

What happens to the plastic in your cutting board?

Emerging research suggests that when people consume microplastics from various sources, such as plastic water bottles, they could get absorbed by the body’s tissues. Some scientists think such absorption may lead to chronic inflammation and oxidative stress, issues that increase the risk of health problems. However, some microplastics, including ones from plastic cutting boards, might be too large for our bodies to absorb. Studies demonstrate that when we cut on plastic boards, microplastics are produced and mix into the food. A single knife stroke can release 100-300 microplastics, according to one analysis. Research has shown that about 50% of the released microplastics stay on the cutting board after chopping and go down the drain when the board is washed (good for you, perhaps, but not great for wastewater pollution). The other 50%, we consume. In 2023, a team of scientists at North Dakota State University found that microplastics were released into carrots chopped on plastic boards. Based on their lab work, the team projected significant exposure to microplastics from regular use of plastic boards for a year.
But they looked only for relatively large microplastics, says Syeed Iskander, assistant professor of environmental engineering and the study’s corresponding author. According to some research, only the tiniest microplastics could enter liver cells and cause changes in human colon cells. Other papers have speculated more generally that only microplastics smaller than 10 microns can be taken up by the body’s organs. (A micron is one-thousandth of a millimeter.) Larger pieces might pass through the digestive tract harmlessly. Iskander thinks that, had his team’s methods allowed them to observe smaller microplastics, they probably would have found many. In another study, researchers in the United Arab Emirates looked at microplastic contamination of raw cut fish and chicken on plastic cutting boards used by butchers. They found only particles 15.6 microns and bigger (though butchers’ forceful chopping may produce different microplastic sizes than home prep). “The size distribution is important when it comes to health because that really governs whether this material is just going to pass through the body or will permeate it,” says Stephanie Wright, associate professor at Imperial College London who studies microplastics and health. Wright adds the microplastics found in the North Dakota and UAE studies “would typically be considered too large to cross the gut” into the rest of the body. The UAE study also found that washing the food after it had been chopped—for one minute with running tap water—removed small amounts of microplastics, but the vast majority stuck to the food, says Thies Thiemann, the study’s corresponding author, a chemistry professor at United Arab Emirates University. Research on particle size isn’t settled. According to some studies, larger microplastics can move through the body’s barriers.

Microplastics may pose a risk regardless of size

Size may be just one determinant of whether microplastics from cutting boards affect health.
Some scientists say the chemicals from microplastics could still cause problems, even if the microplastics themselves pass through and out of the body.  Heat is being studied as a factor. After chopped food is mixed with microplastics, it often goes to the oven, stove, or microwave. Because microplastics contain many chemical additives and have a low melting point, “they may break down and release these chemicals, especially if cooked at high temperatures,” Iskander explains. “The chemicals can readily end up in our blood.” During frying or pressure cooking, “heat will certainly encourage migration of chemical additives out of the plastic,” Wright says. She adds that cooking oils and fatty meats further promote this migration. The same issue happens in reverse when food is cooked whole and then chopped and scraped while still steaming on plastic boards—a common practice at restaurants, Iskander notes. Research hasn’t directly connected the use of plastic cutting boards to human health impacts, but it has been explored in animals. This year, scientists in China fed mice diets prepared on boards made of different plastic types. Another group ate food made on wood boards. After a few months of this, the wood-board group was doing fine, but the mice with food cut on plastic boards had more intestinal inflammation and disturbed gut bacteria. This held true even though no microplastics were found in the mouse bodies, suggesting the chemicals released by the microplastics may have been responsible.  The authors emphasized their findings don’t directly apply to humans. They also noted the mice were purposely given high doses of microplastics to simulate one year of exposure, but over a relatively short period of time. In the future, lower doses should be studied. “It’s hard to extrapolate animal research to much lower exposures every day,” says Wright, who did not work on the study. 
In our kitchens, exposure levels can vary based on additional factors, such as how vigorously you chop—firmer food demands more forceful knife strokes—and your frequency of chopping. (Buying pre-made, ultra-processed food to avoid chopping isn’t the answer; studies consistently find ultra-processed foods contain the most microplastics.) Another issue is how long you’ve had your board. The UAE researchers found plastic boards released more microplastics as they wore down with increased usage. “Repeat behaviors and repeat exposures are probably quite important when we think about long-term health outcomes,” Wright says. Plastics and chemical additives used in cutting boards sold in the U.S. must meet safety requirements of the U.S. Food and Drug Administration (FDA) for “reasonable certainty of no harm,” says Kimberly Wise White, vice president of regulatory and scientific affairs at the American Chemistry Council, a trade association. “This means the [plastic] polymer used to make the board must comply, as well as any additives,” White says. The U.S. Department of Agriculture (USDA) advises on its website that plastic cutting boards can be used “without the worry of impacting one’s health.” But research on microplastics is nascent. The World Health Organization is prioritizing the need to address their “known and predicted health risks.” The European Food Safety Authority says more research is needed, partly because many studies are thought to have overestimated microplastic amounts through flawed measurement. “There’s a lot of uncertainty,” Iskander says.

Cutting board alternatives

If you want to move on from plastic boards, one alternative is wood. However, switching to wood cuts both ways: it involves its own issues and concerns. Wood is easier on knives than another cutting-board material, titanium, but a potential problem is microbial growth.
Wood boards have surfaces with pores that take in moisture and bits of food, letting bacteria penetrate and hang out, potentially leading to cross-contamination. Ben Chapman, department head of agricultural and health sciences at North Carolina State University—whose podcast Risky or Not? analyzes everyday risks from germs—thinks the risk is low if boards are cleaned after every use. Any leftover bacteria “will probably die as they get trapped deep in the cracks,” he says. Without such washing, you could become one of the estimated 48 million cases of food-borne illness that occur annually in the U.S. If you haven’t gotten sick yet, that doesn’t prove invincibility. “The risk of acute illness is a probability game,” Chapman says, depending on the exposure type and timing. Plastic beats wood on convenience, especially when it comes to cleanliness. The dishwasher would destroy wood boards, whereas plastic is dishwasher-friendly. Wood must be washed by hand: first with soap to remove debris, followed by a food-safe sanitizer, Chapman recommends. He uses a plastic board for raw meats and wood boards for everything else. As with plastic, wood boards have to be replaced every few years, when they start falling apart or form dark lines as bacteria accumulate, Chapman says. Increase their longevity by sanding down the biofilm lurking on the surface. Chapman sands his board occasionally to remove this top layer of funk. Wood boards shed microparticles of wood during cutting. However, Chapman notes wood is “essentially plant-based,” so our digestive systems should have no trouble handling these tiny bits. Another potential problem: most cutting boards are glued together from many pieces of wood. Some glues may leach toxic compounds over time. As with plastic boards, though, these additives must be FDA-approved for food contact. Other (more expensive) versions are made of a single solid piece of wood, Thiemann says.
No microplastics, glues, or mixed wood materials could mean fewer mixed feelings about your cutting board.

How Patriarchy Undermined the Roman Republic

The story of the fall of the Roman republic involves dysfunctional government, political selfishness, and constitutional collapse, played by the usual actors in togas, famously among them Cicero and Caesar. It also, unexpectedly, offers an overlooked but important lesson about how women’s history affects everyone’s history in ways that deserve to be remembered. When the first emperor of Rome, Caesar Augustus, rose to power, he swept away legal norms and enacted the “Law of Three Children” sometime between 18 B.C. and 9 A.D. The legislation prevented all wealthy, free-born women from claiming their rightful inheritances unless they had given birth three times. Fiercely independent women who had begun to find a voice in public life, thanks to the generous dowries or the estates they inherited, were expected to be mothers and child-bearers first and activists second—if at all. As the Roman republic’s social and political breakdown quickened, decades of progress in women’s self-determination, emancipation, and participation in public life were erased. The health of the republic suffered because of it. The Roman republic’s carefully calibrated framework of legislative, judicial, and executive action had long been, in practice, a misogynistic, patriarchal, oligarchical swamp. From its founding in 509 B.C., young men were doted on as promising scions of the house. Girls were given a version of their father’s name. Colorless adjectives differentiated any female siblings: First, Second, or Third. They were forced to learn about chastity as young girls and fidelity when they matured and became wives. Marriage contracts could be severe, with a man’s legal control extending over his entire household.
Roman wives replaced their own family’s name with their husband’s first name, signifying by a quirk of Latin grammar that they “belonged to” their husbands—that is, were “in their possession.” The men of the republic, who called themselves their society’s “Chosen Fathers,” enforced this two-tiered society through strict voting laws and limits on women’s autonomy. Heavily manipulated voting districts ensured that only the voices of the senatorial elite, Rome’s self-proclaimed optimates, or “best men,” dominated, not progressive champions, freed slaves, or newly-enfranchised citizens. No woman could run for higher office. Women could neither sit on juries nor exercise their vote. “As soon as women become the equals of men,” the statesman and senator Cato the Elder said in 195 B.C., “they will have become our masters.” Yet as Rome’s republic expanded beyond the capital city, beyond Italy, and gradually acquired its Mediterranean empire, stories of a different sort of woman reset women’s expectations at home. In the eastern Mediterranean, highly educated women philosophers, avant-garde poets, and above all, the fearless Greek-speaking queens of Egypt, including Cleopatra, held sway. Inspired by these role models across Europe, Africa, and Asia, Roman women began to challenge the republic’s inequities and ideologies and claim their voices in the male-dominated republic. Grandmothers and mothers taught their daughters to read and cultivate their intellectual talents. An educated girl, the new wave of educators argued, knew how to assert herself against a man who “swaggers through the city acting like a tyrant.” Cato’s quotation comes from a pivotal moment when women and their allies poured into the streets to demand the repeal of a wartime tax on their savings. Other women were political leaders who earned the scorn of their contemporaries. Some were erased or forgotten.
In one case, Clodia, an upper-class woman and contemporary of Julius Caesar, saw her reputation destroyed by false claims of harlotry, home-wrecking, and husband-killing. Clodia, an unapologetic champion for expanded voting rights for the enfranchised men of Italy, bravely went before an all-male jury in the center of the Roman Forum in April 56 B.C., as the prosecution’s star witness to testify against her day’s runaway, endemic corruption. Instead of defending his client from the charges, however, the leading defense attorney, Marcus Tullius Cicero, turned the case into a referendum on Clodia’s character. Transforming Clodia into the trial’s villain, the speech, the Pro Caelio, outlasted Rome’s fall. It has been taught in high school and college classrooms for two millennia as a masterclass of rhetoric, from which countless men in business, law, and politics have learned to emulate Cicero’s misogyny. Trailblazing women like Clodia have always, in the historian’s shorthand, been called “ahead of their time.” But history deserves to be told from another point of view: by pointing out the parade of men who have stubbornly and perennially thwarted progress. Rome’s republic might have survived a bit longer had its own people listened to, not silenced, its women.

GOATReads: History

7 Everyday Objects From the Shang Dynasty

People living in this Bronze Age civilization crafted unique objects that shed light on life in ancient China some 3,200 years ago. The Shang Dynasty is the earliest Chinese dynasty for which we have solid archeological evidence, including the oldest surviving examples of Chinese writing. Excavations at the ancient Shang capital of Anyang, occupied from roughly 1250 to 1050 B.C., have unearthed fascinating details of daily life in this Bronze Age civilization, from bustling bronze workshops where artisans designed and cast elaborate ceremonial vessels, to royal tombs packed with human sacrifices. Here are seven objects from the Shang Dynasty that shed light on a 3,200-year-old civilization at its peak. 1. Oracle Bones Oracle bones from Anyang, made from cow scapulas and turtle shells, contain the earliest known examples of Chinese writing. Writing in China likely predated the examples found at Anyang. Everyday writing was done on bamboo tablets, but that material doesn’t survive in the archeological record in the region of north China where Anyang is located, says Kyle Steinke, a research curator with the Smithsonian’s National Museum of Asian Art. “Oracle bones are a really important part of how Anyang [the ancient Shang capital] was discovered,” says Steinke, “and they form a very large corpus of the writing that survives.” For centuries, local farmers in Anyang dug up mysterious bone fragments inscribed with ancient characters in their fields, but they sold the bones to be ground up by apothecaries and used in Traditional Chinese Medicine. In 1899 it was discovered that these markings were actually a form of ancient Chinese writing. In the 1920s and '30s, Chinese archeologists investigating the source of these oracle bones at Anyang discovered rammed-earth foundations indicating a once-grand palace complex dating to the Bronze Age.
Oracle bones were used in ancient Chinese divination rituals, explains Steinke, who helped curate Anyang: China’s Ancient City of Kings, an exhibit at the National Museum of Asian Art. “The divinations follow a formula—they identify the date and ask a particular question,” says Steinke. “The questions cover a wide range of topics: the outcome of battles, the upcoming harvest, weather predictions, but also more personal issues like the outcome of a toothache.” The oracle bones presented two outcomes, one auspicious and one inauspicious, then the diviner applied heat to crack the bone or shell. The mark left by the crack indicated whether good or bad fortune awaited. Some of these ancient oracle bones contained the names of Shang kings and consorts, including the legendary Fu Hao, the queen consort of King Wu Ding, who was a feared general in her own right. One turtle-shell oracle bone (featured in an interactive 3D exhibit from the Smithsonian) recorded a divination made for the pregnant queen: On jiashen [the twenty-first day] a crack was made; Nan [the diviner] tested: “Fu Hao’s delivery will be blessed.” 2. Executioner's Blade Five years into the dig at Anyang, archeologists made a monumental discovery—an underground complex of royal tombs belonging to the Shang kings. “There were these enormous cruciform tombs with ramps down to a central rectangular chamber where a timber coffin was placed,” says Steinke. “And on the ramps descending into the tombs were rows and rows of beheaded skeletons.” Analysis of the bones reveals that these sacrificial victims had a diet different from that of the local population, which suggests that they were most likely foreigners, probably enemy captives taken prisoner in battle. Royal tombs also included intact skeletons of Shang servants buried with their king or queen, a separate mode of human sacrifice. “When there’s so much human sacrifice, ceremonial weapons would have clearly been extremely important,” says Steinke.
“They would have publicly displayed the martial prowess and status of the Shang elite.” Shang-era bronze was cast in high-temperature foundries, producing some of the heaviest and highest-quality bronze in the ancient world. The Smithsonian collection includes several large, inscribed ax blades that likely served a ceremonial or ritual purpose, including as an executioner’s weapon.

3. Ritual Vessels

Most of Anyang’s royal tombs were looted in antiquity, possibly by the Zhou, who toppled the Shang Dynasty sometime in the middle of the 11th century B.C. But in 1976, archeologists made another remarkable discovery at Anyang—the untouched tomb of Fu Hao, the famed female general of the late Shang period. The wealth of artifacts packed into Fu Hao’s tomb was “staggering,” says Steinke, including more than two metric tons of ritual bronze vessels, the “hallmark objects of Shang elite material culture.” These ornately decorated bronze vessels were part of ancient Chinese “banqueting” rituals in which food and drink were offered to venerated ancestors and other spirits. “In the Shang period, the two most important types of vessels were a tripod vessel for heating wine and a taller goblet, which could be used for libation offerings,” says Steinke. “These rituals were so overwhelmingly important that even in a humble tomb, you would see pottery versions of these vessels.” These heavy bronze pots and goblets were often decorated with an animal-mask motif known as a taotie. Although the exact meaning of the taotie design isn’t known, Steinke says animal eyes were often used to draw the viewer’s attention, and in Shang art they’re surrounded by geometric horns, jaws and fangs. Producing so much high-quality bronzework required large-scale mining, state-sponsored factories and expertly trained artisans.
According to Keith Wilson, curator of ancient Chinese art at the Smithsonian National Museum of Asian Art, ceramicists would have fashioned the molds that were then used to cast the bronze. “Given the level of technical knowledge that would have been required in this almost industrial, highly organized method of production, that knowledge may well have been passed down through family lines as hereditary positions,” says Wilson.

4. Bone Hairpin

In the tomb of Lady Fu Hao, among the ornate bronze bowls inscribed with her royal name, were more personal objects, including a collection of decorative hairpins carved from animal bone. “Shang art doesn’t depict humans as subject matter very much, so we don’t know that much about hair styles during the period or what people looked like in Anyang,” says Steinke, “but we know that people were buried with objects of personal adornment, including hairpins, jade pendants and textiles. Clearly, Fu Hao was buried with objects that were intended for the afterlife.” Bone carving appears to have been a big industry in Anyang, where Wilson says archeologists have found “huge amounts” of bovine bones and skeletons along with 40 tons of bone debris from a bone-carving factory. “The scale of production may be related to the Shang diet, which seems to have been surprisingly beef heavy,” says Wilson. “We think of ancient people as subsisting on grains, but these people were eating steaks.”

5. Chariots

Both chariots and horses first arrived in China during the Shang period around 1200 B.C., likely introduced by people living on the vast steppes of Inner Asia and Mongolia. The Shang elite quickly adopted both the horse and chariot, and their importance is reflected in elite tombs at Anyang, where dozens of chariots were buried along with their horses and drivers. The Shang-era chariots were made from wood, which rotted away over time.
But incredibly, archeologists were able to painstakingly excavate the soil from around the rotted wood, leaving a three-dimensional model of the chariots made entirely of dirt. The exhumed chariots can be seen today at the Yin Xu Ruins in Anyang, a UNESCO World Heritage Site. Since chariots were new to China, it’s unlikely they were used extensively in warfare; more likely they served as transport for generals and other elite fighters, or for royal hunting parties. Chariots may not have been “everyday” items, but they would have been visible in public spectacles organized by the Shang kings. The Smithsonian exhibit includes some handsome bronze rein guides used by Shang charioteers. Steinke points out that decorations on some of the rein guides aren’t traditional Shang motifs, but suns and other geometric designs from ancient steppe culture, a sign that foreign personnel may have served as the horse drivers and handlers.

6. Bells

Bells were everywhere in Bronze Age China, from tiny bells strung to the collars of dogs to massive bronze bells that were forerunners of a famous ancient Chinese musical instrument. Wilson believes that some of the earliest and smallest bronze bells in China were originally made for dogs and horses, to keep track of pets and property. In Anyang, these tiny bells were found in companion burials of dogs underneath the coffins of their owners. Larger hand bells from the Shang period were clapperless, meaning they were held upright and struck with a mallet. Steinke thinks they were used for signaling as well as serving as simple musical instruments. During the Zhou Dynasty that followed the Shang, bell-making technology flourished and artisans figured out how to cast large bronze bells tuned to play multiple, precise notes. By the 5th century B.C.,
Chinese courts were displaying their sophistication and power with an instrument called the bianzhong, which, in the case of one extraordinary ruler, consisted of 64 tuned bronze bells (some weighing 400 lbs) suspended from wooden frames.

7. Ancestor Tablets

These exquisitely carved jade objects were recovered from tombs at Anyang and other, earlier Bronze Age cities in China, but archeologists are still unsure of their exact function. Jade was one of the most prized media for jewelry and ritual objects. The jade used in Shang-era China was nephrite jade, which has a subtler green color than “imperial green” jadeite. Some of these mysterious handle-shaped objects were inscribed with the names of ancestors, so archeologists initially labeled them ancestor tablets. But Steinke and Wilson don’t believe they were used like ancestor tablets in later times, as part of home altars venerating the dead. Instead, there are clues that these ancestor tablets were used in banqueting rituals. “The best interpretation of how these Shang ‘ancestor tablets’ were used is that they were the handle portion of an implement that was placed in bronze goblets which were filled with a spiced alcoholic beverage, and used as part of some kind of libation ritual directed to the ancestors,” says Steinke. Wilson points out that with nearly all of the ancestor tablets recovered from tombs, one end of the handle is unfinished. “That’s led people to speculate that they may have been part of a larger assembly involving organic materials that have since perished,” says Wilson. “All we have is a handle part of a larger assembled object. That really adds to the mystery.”

Source of the article

GOATReads: Psychology

The Psychology of Collective Abandonment

Why we choose AI over each other.

There's a cognitive dissonance playing out on a planetary scale, revealing something harsh about human psychology. While corporate AI investment reached $252.3 billion in 2024 and tech giants plan to spend $364 billion in 2025 on AI infrastructure, the United Nations faces "a race to bankruptcy" with $700 million in arrears. Meanwhile, the annual funding gap to achieve basic human dignity stands at $4.2 trillion. This is absurdity at scale. What psychological mechanisms allow us to pour hundreds of billions into artificial intelligence while 600 million people will still be living in extreme poverty by 2030? The answer lies in the architecture of human decision-making under conditions of abstraction, proximity bias, and manufactured urgency.

The Tyranny of Tangibility

Human beings respond to what's immediate and concrete. A chatbot answering your questions right now feels more real than a child going hungry on another continent. This is proximity bias: our tendency to prioritize what's close over what's distant, even when the distant has greater moral weight. AI companies exploit this brilliantly. They put products in your hand, on your screen. The benefits feel immediate: efficiency, convenience, novelty. The costs—183 terawatt-hours of electricity in 2024, projected to reach 426 TWh by 2030, and 16 to 33 billion gallons of water annually by 2028—remain abstract. We don't see aquifers depleting. We don't experience the blackouts Mexican and Irish villages face after data centers arrive. Contrast this with global poverty. A mother choosing between food and medicine doesn't register in your daily experience. Schools without teachers, clinics without medicine—these remain distant, statistical, unreal. This is tangibility asymmetry: AI benefits feel real; AI costs feel abstract. The benefits of the UN's Sustainable Development Goals (SDGs) feel abstract to those whose food, water, and shelter are guaranteed.
SDG costs, and the suffering from inaction, feel unreal. Our brains struggle with this inverted relationship between psychological salience and actual importance.

The Seduction of Technological Solutionism

Humans prefer elegant technical solutions to messy human problems—what psychologists call technological solutionism. First, there is the illusion of control. Technology offers the fantasy that complex problems can be solved through engineering rather than by changing behavior or confronting power structures. Developing AI seems more achievable than ending poverty because one is technical (which we can compartmentalize) while the other requires confronting inequality and uncomfortable truths about wealth distribution. Second, moral licensing. When we invest in AI framed as "solving" problems and aiding healthcare diagnosis and climate modeling, we give ourselves psychological permission to ignore how those investments exacerbate other problems. "We're working on the future" justifies abandoning the present. Executives approving billions for AI infrastructure tell themselves they're contributing to progress, even as that infrastructure drains resources from communities in desperate need of water and electricity. Third, future discounting—valuing near-term gains over long-term consequences. AI promises returns next quarter. The 2030 SDG deadline feels distant, even though it is just six years away. This gap makes AI feel urgent and the SDGs optional.

The Diffusion of Responsibility

Perhaps most powerful is diffusion of responsibility, the bystander effect scaled to planetary proportions. When everyone is responsible, no one feels accountable. Consider the tech decision-maker allocating billions to AI. They're not choosing between "AI investment" and "ending child hunger." They're choosing between "AI investment that competitors are making" and "not investing." The counterfactual—what could be done with those billions—never enters their decision space.
SDG funding responsibility is so diffused across humanity that it belongs to no one. This is reinforced by system justification, the tendency to defend existing systems: "This is how markets work." "Capital flows to opportunities." Each statement is defensible alone, but collectively they create a psychological fortress protecting the status quo from moral scrutiny. Meanwhile, global financial wealth reached $305 trillion in 2024. The money exists. But diffusion of responsibility means no individual, institution, or nation feels obligated to mobilize even a fraction of it—the $4.2 trillion annually that would ensure every human has food, water, shelter, healthcare, and education.

FOMO as Moral Anesthetic

The AI investment frenzy exhibits classic bubble psychology: fear of missing out (FOMO) overriding rational assessment. When AI startups raised $110 billion in 2024, up 62%, and markets lose $800 billion in a day on news of a cheaper competitor, we're witnessing panic over principle. FOMO hijacks our social comparison mechanisms. We evaluate investments relative to what others are doing, not against absolute measures of value or social good. If your competitor invests in AI, you must too, regardless of whether it creates genuine value or merely inflates valuations. This creates a trap in which the more irrational the investment, the more urgent it feels. Moral considerations, like the human costs of capital misallocation, become irrelevant under competitive panic.

The Path Forward: ProSocial AI

Breaking these patterns requires restructuring how decisions are made. ProSocial AI is psychologically essential. Rather than asking "What can AI do?" we must ask "What should AI do to enhance human dignity and planetary health?" This reframing activates different mechanisms. Instead of technological solutionism, it invokes moral reasoning. Instead of proximity bias, it demands perspective-taking: imagining those bearing the costs.
Instead of diffusion of responsibility, it creates direct accountability by linking AI development to specific human outcomes. Hybrid intelligence—the complementarity of artificial and natural intelligence—recognizes that critical decisions require human judgment, empathy, and ethical reasoning that AI cannot replicate. When local communities affected by data centers have a voice in deployment decisions, proximity bias works for moral outcomes. When AI development is evaluated against SDG achievement rather than quarterly returns, future discounting is countered by present-focused accountability. This demands human agency amid AI: maintaining human decision-making power. Every algorithm involves human choices about whose interests matter. Democratizing those choices, particularly by including voices from the Global South who are bearing the climate and poverty costs, counters diffusion of responsibility and system justification.

Psychological Leverage Points

Shareholder activism during proxy season transforms diffusion of responsibility into direct accountability. Voting for resolutions requiring AI environmental-impact reporting, or tying executive pay to sustainability metrics, makes invisible costs visible, countering tangibility asymmetry. Institutional divestment advocacy at universities, pension funds, or religious organizations activates social proof, as we look to others to determine appropriate behavior. When institutions publicly shift from extractive AI to regenerative technology, they signal new norms. Narrative reframing is most powerful of all. When you mention that each ChatGPT conversation uses water equivalent to a plastic bottle, you make abstract costs tangible. When you ask what problems AI solves versus exacerbates, you activate critical thinking that counters technological solutionism. When you reframe the "inevitable AI future" as "choosing AI's role in a human future," you restore agency where determinism created learned helplessness.
The tragedy of our moment is choosing the abstraction of AI progress over the reality of human suffering. The opportunity is that psychology works both ways: the same mechanisms that trap us can, when restructured, guide us toward choices honoring human dignity and planetary health. The answer lies not in algorithms, but in recognizing our shared humanity, and acting accordingly.

Source of the article