The backlash against AI is often framed as a series of specific disputes: AI music, AI art, AI writing, AI in schools, AI in offices. But if you look at the tone, the language, and the demands being made, it becomes clear that this is not really a set of narrow debates about specific uses.
It is a broad anti-AI movement.
Not “regulate AI.”
Not “govern AI.”
Not “make AI pay its way.”
But stop AI.
Abolish it. Ban it. Roll it back. Treat it as a moral mistake that should never have existed.
That impulse is understandable. Big technological shifts always generate fear, especially when they arrive fast, unevenly, and under the control of a small number of powerful corporations. But as a strategy, abolition is misguided. It does not address the real forces at work. And it does not offer solutions that could actually succeed.
AI is not going away. The capital has been committed. The infrastructure is being built. The capabilities will continue to improve.
The real question is not whether AI should exist.
The real question is: who pays for it, who controls it, and who benefits from it?
AI Is Not Just Software. It Is Industry.
We talk about AI as if it lives in “the cloud,” but the cloud is just a polite word for factories.
AI runs on data centers: enormous, energy-hungry, water-hungry industrial facilities filled with servers, cooling systems, substations, and fiber lines. These are being built at extraordinary speed across the country and the world.
And like every industrial build-out before it, these facilities are usually sited where land is cheap, power is cheap, regulation is light, and communities have the least power to resist.
Some are being built in places that make no long-term sense at all — including deserts, where water is not just scarce, but irreplaceable.
This is not a cultural problem. It is not a software problem. It is not a philosophical problem.
It is an industrial governance problem.
Right now, AI looks cheap because we are not charging it for what it actually costs.
The Wrong Fight
Much of the anti-AI movement targets the uses of AI rather than the structure of the industry.
Its members protest AI music.
They protest AI images.
They protest AI in classrooms.
They protest AI in offices.
And very often, the demand is not “do this better,” but “don’t do this at all.”
But banning tools does not regulate infrastructure. And moral arguments do not build water policy, energy policy, zoning law, or tax systems.
Trying to abolish AI is like trying to abolish factories in the 19th century or electricity in the 20th. It misunderstands what kind of thing this is.
AI is not a gadget.
It is a general-purpose industrial capability.
The choice is not whether we have it.
The choice is whether we govern it — or let it govern us.
Don’t Tax Learning. Tax Impact.
There is a crucial distinction that gets lost in almost every AI debate: learning is not the problem.
Training should be free. Experimentation should be free. Exploration should be free. Free in the legal sense, not the physical one: the act of building intelligence, human or machine, is not what threatens society.
The problem is the industrial footprint.
AI companies should not be allowed to externalize the real costs of their business.
They should pay for:
• The energy they consume
• The water they consume
• The strain they put on infrastructure
• The communities they disrupt
• The environmental risks they create
• The economic disruption they cause
If a company needs a data center, it should not be allowed to quietly drain a community’s water table. It should not be allowed to build in a desert where water cannot be replaced. It should not be allowed to overload power grids without paying to upgrade them. It should not be allowed to extract enormous value while leaving everyone else with the bill.
Today, AI looks cheap only because the public is subsidizing it without ever admitting as much.
If AI companies were forced to pay the true social, environmental, and infrastructural costs of their operations, AI would become more expensive.
And that would be healthy.
It would force the industry to:
• Build more efficient systems
• Choose locations responsibly
• Invest in energy and water infrastructure
• Slow down reckless scaling
• Optimize for quality instead of brute-force compute
• Use AI where it actually helps instead of everywhere
That is not anti-innovation.
That is how every serious industry is supposed to work.
AI as a Human Good
AI has the potential to be a public good: a tool that amplifies learning, creativity, accessibility, science, medicine, and problem-solving. Used well, it can help us design better systems, model climate outcomes, improve education, reduce waste, accelerate research, and lower the barrier to creation and understanding. But that only happens if we separate the idea of intelligence from the business model of extraction.
Right now, we are repeating a very old mistake: privatizing the benefits while socializing the costs. If AI is going to reshape work and society, then AI companies should be paying into:
• Worker transition and retraining
• Community infrastructure
• Energy and water resilience
• Public research
• Education systems
• Cultural ecosystems
Not as charity. As the cost of doing business.
Why Government Has to Step In
Markets will not solve this. They never have.
Left alone, companies will build where it is cheapest, extract until something breaks, and call it “efficiency.” They will call it “innovation.” They will call it “inevitable.” That is why industrial societies invented:
• Zoning laws
• Environmental regulation
• Infrastructure planning
• Utility oversight
• Industrial taxation
We need the same level of seriousness for AI. Not to stop it. To shape it.
The Real Choice
The real danger is not that AI will make music, images, or text. The real danger is that we let a planetary-scale industrial system grow without ever forcing it to account for its true costs.
The anti-AI movement is right about one thing: this transition is dangerous. But it is wrong about the solution. We do not need abolition. We need governance.
We should not tax learning. We should tax impact. If we do that, AI does not become something that happens to us. It becomes something we use deliberately.
And that is the difference between a future driven by fear and a future built with intention.