From Rage to Power, Part 2: AI Power & Vulnerabilities Map
Where to Push and How to Win
This is the second in a series based on my summer 2025 project for the AI Safety, Ethics & Society course. In the first article, I discussed how to leverage your frustrations and asked you to pick one specific frustration about AI and write one sentence about it—because personal anger is fuel, not a weakness. This week, we’re mapping where your frustration connects to the AI industry’s vulnerabilities.

You’ve picked your frustration. You wrote it down. Maybe it’s about stolen creative work, or wrongful arrests, or the water your community needs going to data centers. Whatever it is, you’re probably thinking: “Okay, but what can I actually do about billion-dollar companies backed by trillion-dollar investors?”
Your anger is legitimate. Now let’s turn it into leverage.
Here’s what might surprise you: The AI industry isn’t as invincible as it looks.
Yes, they have billions in funding. Yes, they have armies of lobbyists. Yes, they control the technology shaping our future. But they also depend on people—cobalt miners in the DRC who could organize, chip factories in Taiwan vulnerable to export controls, content moderators in Kenya who are already organizing, electricity grids in Arizona that communities control, and engineers who can walk out. Every single one of those dependencies is a pressure point.
The key isn’t fighting the whole system at once. It’s finding where your frustration intersects with their vulnerabilities—and pushing there.
You need to understand two things: who has power in AI development, and how the AI supply chain works.
I. Who Has Power (And Their Weak Spots)
The AI industry wants you to think it’s a monolith. It’s not. There are five major stakeholder groups, and each one has vulnerabilities you can exploit.
A. The Tech Giants (OpenAI, Google, Microsoft, Meta, Amazon)
These companies control the models, the deployment platforms, the massive computing power, and much of the user data that makes AI possible. They look unstoppable.
But they have weak points.
Reputation matters to them—especially the consumer-facing companies. They spend millions on brand image, and they’re terrified when it cracks. Their workers can organize, and do. Engineers, content moderators, and contractors have the power to leak, refuse unethical work, and shut down projects. They’re still subject to regulation, even if they’ve captured many regulators. Antitrust scrutiny is growing, and public backlash can destroy user adoption and investor confidence.
Case in point: Project Maven in 2018. Thousands of Google employees protested when they found out their work was being used for a military AI contract. About a dozen publicly resigned. Google caved because workers organized. The company didn’t renew the contract and published AI Principles pledging not to develop AI for weapons. Months later, over 20,000 Google employees walked out over other issues, winning policy changes on forced arbitration.
Worker power terrifies them.
B. The Investors (VCs, Hedge Funds, Pension Funds, Institutional Investors)
The money people. They control funding that determines what gets built, hold board seats that set strategic direction, and push companies to prioritize growth over safety.
But they’re allergic to risk. Reputational scandals threaten valuations and scare their limited partners—the pension funds and university endowments whose money funds these ventures. Regulatory threats disrupt the growth story they’ve promised investors. And many investors have made ESG commitments (environmental, social, and governance pledges) they can be held to. Public companies must respond to shareholder resolutions.
Investors have escalated pressure through shareholder proposals and votes—sometimes winning, often forcing disclosure or board engagement. Meta’s civil rights audit came after sustained civil-society and investor pressure. Microsoft’s ethics committee predates recent proposals, but investor scrutiny has broadened its reporting on AI risk. These weren’t voluntary gestures—they were responses to organized pressure that threatened corporate reputations and regulatory exposure.
Investors respond to risk faster than they respond to ethics. When enough shareholders demand accountability, companies comply.
C. The Workers (Engineers, Content Moderators, Data Labelers, Contractors)
Power gets interesting here. Workers have inside knowledge of how systems work. They can refuse unethical work. They can leak, whistleblow, and organize. They provide the labor that makes AI function.
Their vulnerabilities? Retaliation and job loss. NDAs and non-competes. Visa status for immigrant workers. Isolation from each other.
But organized workers can shut down projects. They can expose unethical work. They have moral authority. And despite company claims, they can’t easily be replaced.
Google engineers stopped Project Maven. Content moderators from nine countries, including Kenya, Poland, Tunisia, and the Philippines, formed the Global Trade Union Alliance of Content Moderators in April 2025, demanding living wages, mental health support, and union representation. (I’ll tell you more about this organizing in the supply chain section.) Daniel Motaung, a former content moderator in Kenya who was fired for trying to organize, helped catalyze this movement. Hollywood writers won AI protections through their 2023 strike.
These companies depend entirely on human labor—both the engineers writing code and the content moderators earning poverty wages to view horrific content.
D. The Regulators (Government Agencies, Legislators, State/Local Officials)
Government actors control the ability to write and enforce rules, wield antitrust authority, set export controls, create procurement standards for government AI use, and approve local zoning and permits for data centers.
They’re often underfunded and understaffed. Many are captured by the industry. They struggle with the technical complexity. They face intense political pressure from industry lobbyists.
Regulators respond to organized public pressure—especially locally, where your voice matters more. Yes, many are captured by industry. That’s exactly why you need to be louder than their lobbyists.
NYC Local Law 144, enforced since 2023, requires companies using AI for hiring to conduct annual bias audits and notify candidates. It passed because labor advocates, civil rights organizations, and tech accountability groups pushed together. Around two dozen U.S. cities and more than a dozen states have banned or restricted government use of facial recognition.
Internationally, the EU AI Act—passed in 2024—created the world’s first comprehensive regulatory framework for AI, banning certain high-risk applications and requiring transparency for others. Civil society organizations across Europe organized for years, building coalitions between privacy advocates, labor unions, and civil rights groups. They created the political pressure that made regulation possible.
E. The Users (All of Us)
We control our data (in theory), our voices and votes, our willingness to use products, and our ability to organize and create public pressure.
Individually, we’re almost powerless. One person deleting an app doesn’t matter. We face manipulation and surveillance. We often lack real alternatives. We’re atomized and isolated.
But collectively? Numbers create political pressure. Coordinated campaigns force change. Public outrage threatens both reputation and regulation.
The Writers Guild of America won AI protections in their 2023 contract because they organized and struck. Hollywood writers made it clear they wouldn’t accept AI undercutting their wages and craft—and they won.
You don’t have power alone. But when your frustration connects you to others, when individual anger becomes organized pressure, things change.
And here’s what makes this powerful: when workers organize AND shareholders demand oversight AND regulators face public pressure, companies can’t play groups against each other. The power multiplies.
II. How the AI Supply Chain Works (And Where It’s Vulnerable)
AI doesn’t magically appear. It requires a complex physical and social supply chain. Each step has weak points where pressure works.
You don’t need to master every technical detail. You just need to know where the system is vulnerable—and where your frustration connects to those vulnerabilities.
Step 1: Raw Materials and Energy
AI doesn’t run on code alone. It requires a massive physical infrastructure—from mining operations to power plants to data centers. Each piece of this infrastructure creates pressure points where communities have leverage.
The Mining Operations
AI infrastructure requires raw materials extracted from the ground. Cobalt for batteries comes primarily from the Democratic Republic of Congo, where children as young as seven work in dangerous mines for less than $2 a day. Rare earth minerals essential for electronics come from China, often with devastating environmental costs. Lithium for the batteries that power backup systems comes from South America.
These aren’t abstract supply chains. They’re actual places where actual people have organized to fight back. In Chile, Indigenous communities near lithium mining operations have successfully stopped or delayed projects through organized resistance and legal challenges. They control the land. Mining companies need their cooperation—or at least their inability to resist. When communities organize, projects stall.
The vulnerability: Mining requires local permits, community acceptance, and stable supply chains. Disruption at this level ripples up through everything else.
The Energy Infrastructure
As I wrote about in depth, AI’s energy demands are staggering. According to one study, training GPT-3 alone required about 1,300 megawatt-hours of electricity. Data centers in the U.S. are expected to reach 6% of total electricity consumption by 2026, growing about 12% every year through 2030.
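If you want to feel how fast that compounds, here is a rough back-of-envelope sketch. It is purely illustrative: it assumes the roughly 6% share and 12% annual growth figures above and, for simplicity, treats total grid demand as flat (it will not be).

```python
# Back-of-envelope: how a ~6% share of U.S. electricity compounds at ~12% a year.
# Illustrative only: assumes total grid demand stays flat, which it won't.
share = 0.06    # assumed data-center share of U.S. electricity in 2026
growth = 0.12   # assumed annual growth in data-center demand

for year in range(2026, 2031):
    print(f"{year}: ~{share:.1%} of U.S. electricity")
    share *= 1 + growth
```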
This creates three pressure points:
Grid access: When a company wants to connect a new data center to the power grid, they join an interconnection queue—basically a waitlist. In places like Northern Virginia, that wait is now 4-7 years. Companies can’t just pay more to skip the line. The grid wasn’t built for this demand, and upgrading it takes time, permits, and community cooperation.
Power generation: Because clean energy can’t be built fast enough, AI is driving a fossil fuel comeback. Coal plants that were scheduled to shut down are staying open. Natural gas plants are being built to meet data center demand. This creates regulatory vulnerability—environmental groups can challenge permits, communities can fight air quality impacts, and climate advocates can demand clean energy requirements.
Energy costs: Residential electricity bills in Northern Virginia are expected to increase by $14-37 per month by 2040 due to data center growth. When communities realize they’re subsidizing Google’s infrastructure through higher electric bills, they get angry. That anger creates political pressure on utilities and local officials.
The vulnerability: Companies need electricity that someone else controls—utilities, grid operators, local governments. Those entities respond to public pressure.
The Data Centers Themselves
Data centers are where the energy meets the computation. A single large facility can consume up to 5 million gallons of water per day for cooling—roughly what a town of 50,000 people uses. Training a single AI model can require over 700,000 liters (about 185,000 gallons) of water.
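A quick sanity check on that comparison, using a common ballpark of about 100 gallons of residential water use per person per day (actual figures vary widely by region):

```python
# Sanity check: is 5 million gallons a day really "a town of 50,000 people"?
# Assumes a ballpark of ~100 gallons of residential use per person per day.
data_center_gallons_per_day = 5_000_000
gallons_per_person_per_day = 100  # rough U.S. residential ballpark

people_equivalent = data_center_gallons_per_day / gallons_per_person_per_day
print(f"Equivalent to roughly {people_equivalent:,.0f} people")  # ~50,000
```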
Data centers need land, water rights, energy connections, and permits. Communities control all of these through:
Zoning and permits: Cities can require special permits for data centers, set limits on energy and water use, or simply deny applications. Phoenix passed regulations in 2024 and 2025 requiring data centers to meet strict health and safety standards after hundreds of pages of public comments pushed back on unchecked growth.
Water rights: In drought-stressed regions like Arizona—which experienced drought across 99% of its territory in recent years—water isn’t unlimited. Tucson passed strict “mega-user” water rules requiring conservation plans after residents organized against a proposed data center during historic drought conditions. When your aquifer is running dry, watching millions of gallons per day cool computer chips feels like theft.
Environmental review: Large projects often require environmental impact assessments. These create opportunities for communities to demand transparency, challenge findings, and delay or stop construction through legal action.
The vulnerability: Unlike cloud computing, you can’t build a data center anywhere. You need specific infrastructure in specific places. And communities control access to those places.
What This Means for Leverage
Every layer of this infrastructure stack—mining, energy, data centers—represents a different pressure point:
In 2022, Meta abandoned plans for a multi-billion-euro data center in the Netherlands after local governments and communities organized to block it over environmental concerns. Ireland paused new data center connections near Dublin until 2028 to manage grid stability. A Michigan township tried to block a 250-acre facility and eventually won restrictions after community outcry and legal challenges.
These victories happened because communities understood their leverage: companies need land, water, energy, and permits. All of those require local cooperation. When communities organize to withhold that cooperation, billion-dollar projects stop.
The companies building AI infrastructure want you to think this is inevitable. But communities have blocked data centers. Environmental groups have forced fossil fuel plants offline. Indigenous peoples have stopped mining operations. The infrastructure isn’t inevitable—it’s contested. And right now, companies are losing some of those contests.
You don’t need to understand mining engineering or electrical grid design. You just need to know: your community controls resources that AI companies desperately need. That’s leverage.
Step 2: Hardware Manufacturing
About 80% of the world’s advanced AI chips are designed and sold by one company: NVIDIA. Nearly all of them are manufactured by a single foundry in Taiwan. This concentration creates vulnerability. When one company controls critical infrastructure, it becomes a target for antitrust action, export controls, and political pressure.
Export controls are already being used as leverage. In 2022, the U.S. government restricted exports of advanced chips to China. That isn’t just abstract geopolitics; it’s about whether authoritarian governments get access to surveillance tools that can track and control their populations. These restrictions work because the semiconductor supply chain is so concentrated.
But concentration also means monopoly power that can cause harm. NVIDIA’s dominance lets them set prices, control access, and shape what AI systems get built. Breaking up tech monopolies through antitrust action is gaining momentum in the U.S. and Europe.
Step 3: Data Collection
AI needs data. Mountains of it. Companies scrape the internet without asking permission. They hire “ghost workers” in Kenya and the Philippines who earn $2/hour labeling images. They employ content moderators to view traumatic material so AI systems can learn to recognize harmful content.
Worker organizing meets data extraction at this pressure point.
GDPR in Europe has proven that data privacy laws have teeth. Clearview AI—a company that scraped billions of photos from social media without consent to build a facial recognition database used by law enforcement—was fined over $90 million across multiple European countries for illegal scraping. The company fought back but ultimately had to stop operating in several jurisdictions. Legal action against unauthorized scraping works because companies fear regulatory penalties and litigation costs.
But the bigger story is worker organizing. In April 2025, content moderators from nine countries launched the Global Trade Union Alliance of Content Moderators in Nairobi, bringing together workers from Kenya, Ghana, Turkey, Poland, Colombia, Portugal, Morocco, Tunisia, and the Philippines. More countries are joining.
Michał Szmagaj, a former Meta moderator now organizing workers in Poland, described what this work actually looks like:
The pressure to review thousands of horrific videos each day—beheadings, child abuse, torture—takes a devastating toll on our mental health, but it’s not the only source of strain. Precarious contracts and constant surveillance at work add more stress.
These workers are organizing globally, demanding living wages, mental health support, stable contracts, and union representation. Companies need this data labeled. If workers refuse, the pipeline breaks.
Step 4: Model Training
Training a single AI model can produce 283 tons of CO2—roughly the lifetime emissions of five cars. These models run on huge computing resources for weeks or months, consuming massive amounts of energy.
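If you’re wondering where the “five cars” comparison comes from, the arithmetic is simple. This sketch assumes the commonly cited estimate of roughly 57 metric tons of CO2 per car over its lifetime, including fuel:

```python
# The arithmetic behind "283 tons of CO2, roughly five cars' lifetime emissions".
# Assumes ~57 metric tons of CO2 per car over its lifetime, including fuel
# (a commonly cited average; real figures vary by vehicle and usage).
training_emissions_tons = 283
car_lifetime_tons = 57

print(f"~{training_emissions_tons / car_lifetime_tons:.1f} car lifetimes")  # ~5.0
```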
Communities control that energy—through public utility boards, local government, and energy regulations.
This pressure works. Major AI conferences now encourage researchers to report the energy usage and carbon footprint of training models. This didn’t happen voluntarily—environmental advocates pushed for transparency, then used that transparency to demand accountability. The EU AI Act now requires providers of general-purpose AI models to document energy consumption, while California’s SB 253 mandates that companies with over $1 billion in revenue report their greenhouse gas emissions starting in 2026. California has also used its Environmental Quality Act to conduct environmental impact assessments of data centers, though these remain rare.
The same pressure that forced disclosure can force reduction. Small transparency wins create momentum for bigger accountability measures.
Step 5: Deployment
AI moves from lab to life at this stage—and often it happens without testing for safety, bias, or unintended consequences.
You can fight this. Require testing before deployment. Mandate bias audits. Create liability for algorithmic harm.
NYC’s Local Law 144 requires bias audits for AI hiring tools. Around 21 cities have banned or restricted government use of facial recognition, and 15 states have various restrictions on law enforcement use. These didn’t happen because governments spontaneously decided to act. Civil rights organizations, labor advocates, tech accountability groups, and community organizers pushed together.
People like Robert Williams—wrongfully arrested in Detroit in 2020 because facial recognition misidentified him—told their stories publicly and demanded change. Williams spent 30 hours in jail for a crime he didn’t commit. He sued, and his case helped catalyze the movement to restrict facial recognition technology. Williams reached a landmark settlement in June 2024 that created what experts call the nation’s strongest police department policies on facial recognition, requiring Detroit police to implement strict safeguards and prohibiting arrests based solely on facial recognition results.
The facial recognition ban movement has slowed—five municipal bans passed in 2021, but none in 2022 or 2023. Some cities have reversed their bans: New Orleans overturned its ban in July 2022, and Virginia reversed restrictions in 2022 to allow police use in certain situations. Why? Because sustained pressure is hard. Because advocates got tired or moved on. Because industry lobbying intensified.
Pressure works. But only if you keep pushing.
Step 6: Use and Impact
Abstract “AI development” becomes concrete here. A cop shows up at your door to arrest you for a crime you didn’t commit. An algorithm rejects your resume. Your landlord uses facial recognition to track who enters the building.
People experience real consequences. They get wrongfully arrested. They lose job opportunities. They’re denied housing. And often they can’t prove they were harmed by an algorithm because the system is opaque.
The vulnerability is accountability. Can people prove they were harmed? Can they sue? Can they demand explanation?
Right now, mostly no. But that’s changing because people are organizing to make it change.
“Right to explanation” laws would require companies to explain algorithmic decisions that affect people’s lives—why you were denied a loan, why you didn’t get the interview, why the algorithm flagged you. “Mandatory human review” means a real person must check high-stakes algorithmic decisions before they’re implemented—a human has to verify before you’re arrested or fired or denied housing. These aren’t hypothetical—the EU’s GDPR includes provisions for automated decision-making under Article 22, which limits solely automated decisions and requires safeguards including meaningful information about the logic involved and the right to human intervention. In the U.S., similar proposals are being considered: the Algorithmic Accountability Act would require impact assessments for automated decision systems, and various states are proposing legislation on algorithmic transparency and human review.
Victims of AI bias are building the legal precedent that algorithmic discrimination is illegal discrimination. In 2023, Derek Mobley filed a lawsuit against Workday—an HR software company—for algorithmic discrimination in hiring. In May 2025, a federal judge allowed the case to proceed as a collective action, potentially covering millions of applicants over 40 who were screened by Workday’s AI. The EEOC backed the plaintiff, emphasizing that algorithmic hiring tools can violate anti-discrimination laws. Even though the case is still ongoing, it creates precedent and liability risk that makes companies nervous. These lawsuits create the fear that drives policy change.
Your frustration likely lives here—in the impact. And you have power here, because companies fear being sued and regulated more than they fear bad publicity.
III. Connecting Your Frustration to the Supply Chain
Remember that one sentence you wrote in Article 1? Here’s how it maps to where you can push.
If you’re frustrated by job losses or worker exploitation, target Steps 3, 5, and 6—Data Collection, Deployment, and Use. Support labor organizing. Back content moderator unions. Demand worker protections before AI deployment. Fight for right to human review.
If you’re angry about environmental damage, target Steps 1 and 4—Raw Materials and Training. Join or support climate groups targeting data centers. Demand transparency on energy and water use. Fight for clean energy requirements. Organize to block harmful data center construction. I wrote more about why communities are pushing back here.
If you’re worried about bias and discrimination, target Steps 3, 5, and 6—Data Collection, Deployment, and Use. Demand diverse training data. Require bias audits before deployment. Support discrimination lawsuits. Advocate for right to explanation and human review.
Concerned about privacy and surveillance? Target Step 3—Data Collection. Support data privacy laws. Fight scraping through legal action. Demand consent requirements. Advocate for opt-in rather than opt-out.
Care about exploited workers in the Global South? Target Step 3—Data Collection. Support content moderator organizing. Demand fair pay and mental health support for data labelers. Fight for global labor standards in tech supply chains.
Want to limit corporate power concentration? Target Step 2—Hardware. Support antitrust action against NVIDIA and other monopolies. Advocate for chip export controls as a human rights tool. Push for public infrastructure investment rather than just private subsidies.
If you’re worried about AI safety and catastrophic risks, target Steps 4 and 5—Training and Deployment. Demand safety testing before deployment. Support research transparency requirements. Advocate for strict liability for algorithmic harms.
If you’re frustrated by lack of accountability, target Steps 5 and 6—Deployment and Use. Fight for transparency mandates. Support right-to-explanation laws. Create pathways to sue for algorithmic harm. Protect whistleblowers who expose wrongdoing.
You don’t need to work on all of these. You just need to find your piece—the place where what makes you angry meets a real pressure point—and start pushing there.
Another approach is to identify the pressure points where you have the most leverage, or where the chain is weakest, even if they aren’t directly tied to your issue. Then attach demands that fit your goals and frustrations to the pressure you apply there. It can be a surprisingly powerful way to win concessions.
The chain is only as strong as its weakest link. And right now, there are many weak links.
IV. The Core Insight
Here’s what the AI industry doesn’t want you to know: they’re not invincible. They’re dependent.
Every step in that supply chain is vulnerable to organized pressure: the mines where children extract cobalt, the moderators viewing traumatic content, the engineers who can walk out, the communities that control land and energy.
Every dependency is a point of leverage.
You picked your issue in Article 1. Now you know where it connects to the system’s vulnerabilities. That’s power.
Remember what you’ve already seen in this article: workers stopped Project Maven, content moderators formed a global union, cities restricted facial recognition, NYC mandated hiring audits, Hollywood writers won protections, Meta abandoned a massive data center in the Netherlands after community organizing and political pressure, the EU passed comprehensive AI regulation. Every one of these victories started with people who were frustrated, found others who cared, and refused to give up.
These victories took time. They took sustained effort. Some momentum has slowed. Some battles are still being fought. Being honest about that doesn’t diminish the wins—it shows you what it takes to create change.
Since early 2025, the Trump administration has mixed tightening with selective easing: it lifted export-license requirements on chip-design (EDA) software to China in July 2025 and has granted licenses for some Nvidia H20 sales, even as it revoked TSMC’s fast-track tool-export waiver for its Nanjing fab and floated broad new software-based export curbs in response to China’s rare-earth moves.
This selective easing and tightening makes the chip supply chain more volatile: Chinese designers can keep taping out chips and buying some accelerators, but equipment and software bottlenecks inject delays and higher compliance costs that stretch lead times and re-route capacity.
For the power sector, that stop-and-go policy translates into harder-to-predict AI compute buildouts—and therefore gigawatt-scale load swings—exactly the uncertainty utilities and grid planners are already flagging as they try to forecast AI-driven demand.