Tag: AI

  • The AI Backlash Is Aiming at the Wrong Target

    The backlash against AI is often framed as a series of specific disputes: AI music, AI art, AI writing, AI in schools, AI in offices. But if you look at the tone, the language, and the demands being made, it becomes clear that this is not really a set of narrow debates about specific uses.

    It is a broad anti-AI movement.

    Not “regulate AI.”
    Not “govern AI.”
    Not “make AI pay its way.”

    But stop AI.

    Abolish it. Ban it. Roll it back. Treat it as a moral mistake that should never have existed.

    That impulse is understandable. Big technological shifts always generate fear, especially when they arrive fast, unevenly, and under the control of a small number of powerful corporations. But as a strategy, abolition is misguided. It does not address the real forces at work. And it does not offer solutions that could actually succeed.

    AI is not going away. The capital has been committed. The infrastructure is being built. The capabilities will continue to improve.

    The real question is not whether AI should exist.

    The real question is: who pays for it, who controls it, and who benefits from it?

    AI Is Not Just Software. It Is Industry.

    We talk about AI as if it lives in “the cloud,” but the cloud is just a polite word for factories.

    AI runs on data centers: enormous, energy-hungry, water-hungry industrial facilities filled with servers, cooling systems, substations, and fiber lines. These are being built at extraordinary speed across the country and the world.

    And just like every industrial build-out before it, they are usually placed where land is cheap, power is cheap, regulation is light, and communities have the least power to resist.

    Some are being built in places that make no long-term sense at all — including deserts, where water is not just scarce, but irreplaceable.

    This is not a cultural problem. It is not a software problem. It is not a philosophical problem.

    It is an industrial governance problem.

    Right now, AI looks cheap because we are not charging it for what it actually costs.

    The Wrong Fight

    Much of the anti-AI movement is focused on uses of AI rather than on the structure of the industry.

    They protest AI music.
    They protest AI images.
    They protest AI in classrooms.
    They protest AI in offices.

    And very often, the demand is not “do this better,” but “don’t do this at all.”

    But banning tools does not regulate infrastructure. And moral arguments do not build water policy, energy policy, zoning law, or tax systems.

    Trying to abolish AI is like trying to abolish factories in the 19th century or electricity in the 20th. It misunderstands what kind of thing this is.

    AI is not a gadget.

    It is a general-purpose industrial capability.

    The choice is not whether we have it.

    The choice is whether we govern it — or let it govern us.

    Don’t Tax Learning. Tax Impact.

    There is a crucial distinction that gets lost in almost every AI debate: learning is not the problem.

    Training should be free. Experimentation should be free. Exploration should be free. The act of building intelligence — human or machine — is not what threatens society.

    The problem is the industrial footprint.

    AI companies should not be allowed to externalize the real costs of their business.

    They should pay for:

    • The energy they consume
    • The water they consume
    • The strain they put on infrastructure
    • The communities they disrupt
    • The environmental risks they create
    • The economic disruption they cause

    If a company needs a data center, it should not be allowed to quietly drain a community’s water table. It should not be allowed to build in a desert where water cannot be replaced. It should not be allowed to overload power grids without paying to upgrade them. It should not be allowed to extract enormous value while leaving everyone else with the bill.

    Right now, AI looks cheap because we are subsidizing it without admitting we are doing so.

    If AI companies were forced to pay the true social, environmental, and infrastructural costs of their operations, AI would become more expensive.

    And that would be healthy.

    It would force the industry to:

    • Build more efficient systems
    • Choose locations responsibly
    • Invest in energy and water infrastructure
    • Slow down reckless scaling
    • Optimize for quality instead of brute-force compute
    • Use AI where it actually helps instead of everywhere

    That is not anti-innovation.

    That is how every serious industry is supposed to work.

    AI as a Human Good

    AI has the potential to be a public good: a tool that amplifies learning, creativity, accessibility, science, medicine, and problem-solving. Used well, it can help us design better systems, model climate outcomes, improve education, reduce waste, accelerate research, and lower the barrier to creation and understanding. But that only happens if we separate the idea of intelligence from the business model of extraction.

    Right now, we are repeating a very old mistake: privatizing the benefits while socializing the costs. If AI is going to reshape work and society, then AI companies should be paying into:

    • Worker transition and retraining
    • Community infrastructure
    • Energy and water resilience
    • Public research
    • Education systems
    • Cultural ecosystems

    Not as charity. As the cost of doing business.

    Why Government Has to Step In

    Markets will not solve this. They never have.

    Left alone, companies will build where it is cheapest, extract until something breaks, and call it “efficiency.” They will call it “innovation.” They will call it “inevitable.” That is why industrial societies invented:

    • Zoning laws
    • Environmental regulation
    • Infrastructure planning
    • Utility oversight
    • Industrial taxation

    We need the same level of seriousness for AI. Not to stop it. To shape it.

    The Real Choice

    The real danger is not that AI will make music, images, or text. The real danger is that we let a planetary-scale industrial system grow without ever forcing it to account for its true costs.

    The anti-AI movement is right about one thing: this transition is dangerous. But it is wrong about the solution. We do not need abolition. We need governance.

    We should not tax learning. We should tax impact. If we do that, AI does not become something that happens to us. It becomes something we use deliberately.

    And that is the difference between a future driven by fear and a future built with intention.

  • When More Music Isn’t a Problem: The Myth of the AI Glut

In a Facebook group discussion the other day, an artist shared their frustration after a heated exchange elsewhere. The flashpoint was simple enough: someone mentioned hearing about an artist who had released two hundred albums. Another musician chimed in to say they had about twenty themselves. Suddenly, the tone shifted. The twenty-album musician’s work was written off as “bad,” accused of “messing up Spotify’s algorithms,” and even of “ruining the path to the top for talented AI musicians.”

    The hostility was enough to make them leave the group. What struck me most wasn’t just the unnecessary anger but the deeper pattern it reflects. For as long as music has been recorded and shared, there have always been people ready to cry “glut” the moment more voices enter the room.

    A Familiar Complaint Through History

    The idea of “too much music” is not new. It has surfaced every time technology opens the door for new creators:

    • Sheet music and publishing houses in the late 19th century lowered the barrier for amateur composers. Critics complained about “cheap ditties” flooding the market.
    • The phonograph and 78 RPM records in the early 20th century let regional acts press and distribute their songs. Suddenly, music wasn’t confined to elite concert halls—and some gatekeepers weren’t happy.
    • The cassette era in the 1970s and 80s gave rise to home tapers and underground distribution networks. Once again, cries of “noise,” “copycat,” and “oversaturation” followed.
    • Digital recording and CD burning in the 90s. Same story.
    • File-sharing platforms, MySpace, and SoundCloud in the 2000s and 2010s. Ditto.

    Now it’s AI music—tools like Suno, Udio, and others—that have expanded access once again. And predictably, the narrative of a “glut” has returned.

    The Gatekeeping Instinct

    What these reactions really reveal is not a problem with the amount of music, but a discomfort among those who feel their position is threatened. Gatekeepers—whether they’re critics, labels, or simply people who’ve worked hard to navigate a system that seemed more exclusive—often react with suspicion or hostility when that system suddenly widens.

    They say there’s “too much.” What they really mean is: “Too many people who aren’t like me now have access.”

    This isn’t just about music. We’ve seen it in writing, in art, in journalism. Every new medium that democratizes creation sparks the same argument.

    Do More Songs Really Mean More Noise?

    Here’s the thing: no listener is sitting at home, manually scrolling through a million new uploads every day. Discovery doesn’t work that way. People find music through:

    • Algorithms (Spotify, YouTube, TikTok).
    • Playlists curated by people or by brands.
    • Labels that select and promote particular artists.
    • Communities that share recommendations.

    If I release one song and someone else releases 100,000, it doesn’t mean listeners are slogging through 100,001 files to stumble onto mine. It means the discovery systems are choosing which ones to surface. In practice, most people encounter only a sliver of what’s out there—whether the total pool is a thousand songs or a billion.

    So the panic about being “drowned out” doesn’t hold water. More music doesn’t prevent anyone from being heard. If anything, it gives listeners more chances to discover something they connect with.

    The Real Argument

    Once we cut through the “glut” complaint, the conversation usually shifts:

    • “But most of this music is bad.”
    • “These tools let untalented people dilute the pool.”
    • “It makes it harder to find the ‘real’ artists.”

    This reveals the real anxiety: that the value of music is tied to exclusivity. If anyone can make it, then what does that say about those who once held the keys?

    But history shows us the opposite. The democratization of tools—cassette four-tracks, DAWs, and now AI—has expanded music culture, not ruined it. For every critic who cried “glut,” there was a listener who found a new favorite song that never would have existed otherwise.

    Why Suno and AI Music Feel Threatening

    AI music creation tools like Suno change the scale of production dramatically. A motivated creator can release songs daily, weekly, even in massive batches. That challenges old ideas about the pace of release, artistic labor, and scarcity.

    For some, this feels like a devaluation. But it isn’t. The value of a song isn’t determined by how hard it was to make, or how long it took. It’s determined by whether it connects with someone. A track that resonates—whether it took six months in a studio or six minutes in an AI tool—has done its job as art.

    And here’s the part critics miss: audiences don’t care how the sausage is made. They care how it makes them feel.

Why More Songs Matter

    Consider this: when the average listener opens Spotify, they don’t think, “Wow, there are 120,000 new songs today, I’ll never keep up.” They think:

    • “What should I listen to right now?”
    • “What’s in my Release Radar?”
    • “What did my friend send me?”

    The abundance of music doesn’t overwhelm—it enriches. Because in that massive pool, the chances increase that a listener somewhere will find exactly the sound, the lyric, the mood that clicks for them.

    And in an age where niche communities thrive online, even a song that reaches only 100 listeners can matter deeply to those 100 people. That’s not dilution. That’s expansion.

    The Uneasy Future of Gatekeepers

    So why the anger? Because on some level, those clinging to the “glut” argument feel uneasy about the future. They see tools like Suno lowering the walls, and they fear their place is less secure.

    But they shouldn’t. If their music matters—if it connects—it will continue to find listeners. Just as it always has.

    Technology doesn’t replace human connection. It multiplies opportunities for it.

    Constructive vs. Destructive Community

That’s why the artist’s final reflection hit me: they left the toxic group and decided to stay in one where people uplift each other, where criticism is constructive rather than dismissive. That’s the model we should aim for.

Because in the end, music is not a zero-sum game. One person’s release doesn’t subtract from another’s. The pie isn’t shrinking; it keeps growing.

    We can choose to see abundance as dilution—or as possibility.

    And the truth is, the more voices we welcome, the more vibrant our collective sound becomes.