
Published on Nov 21, 2025
GOATReads: Psychology
The Psychology of Collective Abandonment

Why we choose AI over each other.

There's a cognitive dissonance playing out on a planetary scale, revealing something harsh about human psychology. While corporate AI investment reached $252.3 billion in 2024 and tech giants plan to spend $364 billion in 2025 on AI infrastructure, the United Nations faces "a race to bankruptcy" with $700 million in arrears. Meanwhile, the annual funding gap to achieve basic human dignity stands at $4.2 trillion. This is absurdity at scale.

What psychological mechanisms allow us to pour hundreds of billions into artificial intelligence while an estimated 600 million people are projected to still be living in extreme poverty in 2030? The answer lies in the architecture of human decision-making under conditions of abstraction, proximity bias, and manufactured urgency.

The Tyranny of Tangibility

Human beings respond to what's immediate and concrete. A chatbot answering your questions right now feels more real than a child going hungry on another continent. This is proximity bias, our tendency to prioritize what's close over what's distant, even when the distant has greater moral weight.

AI companies exploit this brilliantly. They put products in your hand, on your screen. The benefits feel immediate: efficiency, convenience, novelty. The costs—183 terawatt-hours of electricity in 2024, projected to reach 426 TWh by 2030, or 16 to 33 billion gallons of water annually by 2028—remain abstract. We don't see aquifers depleting. We don't experience the blackouts that communities in Mexico and Ireland face after data centers arrive.

Contrast this with global poverty. A mother choosing between food and medicine doesn't register in your daily experience. Schools without teachers, clinics without medicine—these remain distant, statistical, unreal.

This is tangibility asymmetry: AI benefits feel real; AI costs feel abstract. The benefits of the UN's Sustainable Development Goals (SDGs) feel abstract to those whose food, water, and shelter are guaranteed, and the costs of inaction, the suffering it produces, feel unreal. Our brains struggle with this inverted relationship between psychological salience and actual importance.

The Seduction of Technological Solutionism

Humans prefer elegant technical solutions to messy human problems—a tendency known as technological solutionism. Three mechanisms sustain it.

First, the illusion of control. Technology offers the fantasy that complex problems can be solved through engineering rather than changing behavior or confronting power structures. Developing AI seems more achievable than ending poverty because one is technical (which we can compartmentalize) while the other requires confronting inequality and uncomfortable truths about wealth distribution.

Second, moral licensing. When we invest in AI framed as "solving" problems, such as aiding healthcare diagnosis and climate modeling, we give ourselves psychological permission to ignore how those investments exacerbate other problems. "We're working on the future" justifies abandoning the present. Executives approving billions for AI infrastructure tell themselves they're contributing to progress, even as that infrastructure drains resources from communities in desperate need of water and electricity.

Third, future discounting—valuing near-term gains over long-term consequences. AI promises returns next quarter. The 2030 SDG deadline feels distant, even though it is only a few years away. This gap makes AI feel urgent and the SDGs optional.

The Diffusion of Responsibility

Perhaps most powerful is diffusion of responsibility, the bystander effect scaled to planetary proportions. When everyone is responsible, no one feels accountable.

Consider the tech decision-maker allocating billions to AI. They're not choosing between "AI investment" and "ending child hunger." They're choosing between "AI investment that competitors are making" and "not investing." The counterfactual—what could be done with those billions—never enters their decision space. SDG funding responsibility is so diffused across humanity that it belongs to no one.

This is reinforced by system justification, defending existing systems: "This is how markets work." "Capital flows to opportunities." Each statement is defensible alone, but collectively they create a psychological fortress protecting the status quo from moral scrutiny.

Meanwhile, global financial wealth reached $305 trillion in 2024. The money exists. But diffusion of responsibility means no individual, institution, or nation feels obligated to mobilize even a fraction—$4.2 trillion annually—to ensure that every human has food, water, shelter, healthcare, and education.
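To make "even a fraction" concrete, here is a rough back-of-the-envelope division using only the two figures cited above, the $305 trillion stock of global financial wealth and the $4.2 trillion annual funding gap; it is an illustration of scale, not a financing proposal:

\[
\frac{\$4.2\ \text{trillion}}{\$305\ \text{trillion}} \approx 0.014 \quad \text{(roughly 1.4\% of global financial wealth per year)}
\]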

FOMO as Moral Anesthetic

The AI investment frenzy exhibits classic bubble psychology: fear of missing out (FOMO) overriding rational assessment. When AI startups raised $110 billion in 2024, up 62%, and markets shed $800 billion in a single day on news of a cheaper competitor, we are witnessing panic over principle.

FOMO hijacks our social comparison mechanisms. We evaluate investments relative to what others are doing, not against absolute measures of value or social good. If your competitor invests in AI, you must too, regardless of whether it creates genuine value or inflates valuations.

This creates a trap in which the more irrational the investment, the more urgent it feels. Moral considerations, like human costs of capital misallocation, become irrelevant under competitive panic.

The Path Forward: ProSocial AI

Breaking these patterns requires restructuring how decisions get made, which is why ProSocial AI is psychologically essential. Rather than asking "What can AI do?" we must ask "What should AI do to enhance human dignity and planetary health?"

This reframing activates different mechanisms. Instead of technological solutionism, it invokes moral reasoning. Instead of proximity bias, it demands perspective-taking, imagining those bearing the costs. Instead of diffusion of responsibility, it creates direct accountability by linking AI development to specific human outcomes.

Hybrid intelligence—the complementarity of artificial and natural intelligence—recognizes that critical decisions require human judgment, empathy, and ethical reasoning that AI cannot replicate. When local communities affected by data centers have a voice in deployment decisions, proximity bias works for moral outcomes. When AI development is evaluated against SDG achievement rather than quarterly returns, future discounting is countered by present-focused accountability.

This demands preserving human agency amid AI, keeping decision-making power in human hands. Every algorithm involves human choices about whose interests matter. Democratizing those choices, particularly by including voices from the Global South, which bears the heaviest climate and poverty costs, counters diffusion of responsibility and system justification.

Psychological Leverage Points

Shareholder activism during proxy season transforms diffusion of responsibility into direct accountability. Voting for resolutions requiring AI environmental impact reporting or tying executive pay to sustainability metrics makes invisible costs visible, countering tangibility asymmetry.

Institutional divestment advocacy at universities, pension funds, and religious organizations activates social proof, our tendency to look to others to determine appropriate behavior. When institutions publicly shift from extractive AI to regenerative technology, they signal new norms.

Narrative reframing is the most powerful lever. When you mention that each ChatGPT conversation consumes roughly a bottle's worth of water, you make abstract costs tangible. When you ask what problems AI solves versus which ones it exacerbates, you activate the critical thinking that counters technological solutionism. When you reframe the "inevitable AI future" as "choosing AI's role in a human future," you restore agency where determinism created learned helplessness.

The tragedy of our moment is that we are choosing the abstraction of AI progress over the reality of human suffering. The opportunity is that psychology works both ways: the same mechanisms trapping us can, when restructured, guide us toward choices that honor human dignity and planetary health.

The answer lies not in algorithms, but in recognizing our shared humanity, and acting accordingly.
