The Political Implications of Artificial Intelligence

Will AI shape history or serve existing power structures?

Can artificial intelligence independently alter the trajectory of human civilization? Or will it follow the pattern of past technologies—serving specific agendas and reinforcing existing inequalities?

Karl Marx once famously said, “The hand mill gives you society with the feudal lord; the steam mill, society with the industrial capitalist.” Time and again, we’ve witnessed new technologies reshaping economic structures and, in turn, political authority.
Now, as AI rapidly becomes a key force in our production systems—mirroring the transformative roles of the hand and steam mills—questions arise: Who will benefit from this new technology? Will AI develop its own agency, or simply empower certain groups while marginalizing others?

Recent headlines have been filled with unsettling examples of AI’s capabilities—deepfake interviews with incapacitated celebrities like Michael Schumacher, fabricated images of public figures like Donald Trump, and AI-generated student essays. These hyperrealistic creations have sparked alarm among thought leaders about the societal threats this technology may pose.
In response, leading figures such as Apple co-founder Steve Wozniak, AI pioneer Yoshua Bengio, and Tesla CEO Elon Musk signed an open letter in March 2023. The letter warned of an “out-of-control race” among AI developers and urged a six-month pause on training the most powerful systems. Even Geoffrey Hinton, one of the pioneers of deep learning, resigned from Google so that he could speak openly about his regrets and fears.

Still, while we acknowledge the risks AI presents, unlike Wozniak or Hinton we do not believe that it can independently shape the future. That’s because AI—like all technology—is deeply embedded with human values, priorities, and cultural norms. Philosopher Donna Haraway put it best: “Technology is not neutral. We’re inside of what we make, and it’s inside of us.”

Before we go further, we must first define what AI actually is today. This is difficult not only because of the technology’s complexity but also because of the way the media inflate its capabilities.

The popular narrative suggests that we’re on the brink of creating machines with consciousness, similar to those depicted in The Matrix, Blade Runner, or 2001: A Space Odyssey.
This portrayal, however, is misleading. While computers are getting better at performing complex tasks, there’s no real evidence that we’re close to building machines that can truly think.
Linguist Noam Chomsky, along with Ian Roberts and Jeffrey Watumull, recently argued in The New York Times that language models like ChatGPT operate very differently from the human mind. These systems rely on statistical pattern recognition, not actual reasoning or understanding.
To paraphrase philosopher Martin Heidegger: “AI doesn’t think—it just calculates.”
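
To make that point concrete, consider a deliberately crude sketch: a toy “language model” in Python that bears no resemblance to ChatGPT’s scale or architecture. It continues a prompt purely by counting which word has tended to follow which in a scrap of training text, then sampling from those counts, with no grasp of what the words mean.

```python
# A toy bigram "language model": it only counts which word tends to follow
# which in a small corpus, then extends a prompt by sampling from those counts.
# Purely illustrative -- nothing like a real large language model -- but it
# shows what "calculation without thought" looks like in miniature.
import random
from collections import defaultdict, Counter

corpus = (
    "the steam mill gives you society with the industrial capitalist "
    "the hand mill gives you society with the feudal lord"
).split()

# For every word, count which words follow it in the corpus.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(prompt: str, length: int = 8) -> str:
    """Extend a prompt one word at a time by sampling likely successors."""
    words = prompt.split()
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break  # no statistics for this word: the "model" has nothing to say
        next_word = random.choices(
            list(candidates), weights=list(candidates.values())
        )[0]
        words.append(next_word)
    return " ".join(words)

print(continue_text("the hand"))
```

Everything such a system produces is a statistical echo of its training text; nothing in it corresponds to understanding, intention, or judgment.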

Federico Faggin, creator of the first commercial microprocessor (Intel 4004), draws a sharp line between the objective, transferable knowledge of machines and the subjective, lived experience of human beings. In his 2022 book Irriducibile, he emphasizes that true understanding requires consciousness—something AI fundamentally lacks.

So what does all this mean for humanity’s future? If AI, like Chiron the centaur, is prodigiously knowledgeable yet incapable of genuine thought, then who exactly stands to gain from its development? What values will it promote or enforce?

Chomsky and colleagues asked ChatGPT whether it could make moral decisions. Its response: “As an AI, I do not have moral beliefs or the ability to make moral judgments.” This detachment mirrors the supposed neutrality of classical liberalism, which seeks to separate values from public life in favor of market-based rationality.

In truth, AI seems to reinforce this same logic. It’s becoming a new frontier for capitalist innovation—displacing workers, automating tasks, and aligning its “ethics” with market efficiency. The emerging reality is concerning: AI is not ushering in an age of moral enlightenment but one shaped by commercial interests and market-driven reasoning.

David Krueger, a machine learning professor at the University of Cambridge, recently remarked that most AI research is funded by major tech companies. He warned that public trust may soon erode as people come to see optimism about AI as a product of conflicts of interest rather than well-founded analysis.

Ultimately, if society takes a stand, it may yet challenge the technological determinism implied by Marx’s aphorism. For now, however, AI appears to be firmly rooted in—and reinforcing—the logic of free-market capitalism.

The true threat of AI isn’t just its ability to generate convincing fake content or distort reality. The deeper concern is that it’s emerging as a tool for extending the same individualistic, extractive values that underpin modern capitalism—undermining solidarity and community in the process.
