What Is the Political Agenda of Artificial Intelligence?

Could AI alone shape the future of humanity, or will it simply become another tool that serves the interests of a select few?

Karl Marx once said, “The hand mill gives you society with the feudal lord; the steam mill society with the industrial capitalist.” History shows that technology often shapes how societies operate and who holds power. With AI emerging as a major productive force, the question arises: who will benefit from it? As AI rapidly advances, seemingly beyond human control, we must ask whether it will direct history or serve existing power structures.

Today, AI-generated content—like fabricated interviews, fake photos, and essays—has sparked deep concern. Examples include a false interview with Michael Schumacher, images of Donald Trump being arrested, and essays written by ChatGPT. These cases have raised alarms among experts and public figures about the potential dangers of AI.

In March 2023, prominent voices like Steve Wozniak, Yoshua Bengio, and Elon Musk signed an open letter calling for a pause on the most powerful AI systems, warning that development was outpacing human oversight. Geoffrey Hinton, one of AI’s key pioneers, left Google to speak openly about his fears, even voicing regret over his role in AI’s development.

While we recognize the real risks associated with AI, we do not believe it will dictate the future without human influence. AI, like all technologies, reflects human values and priorities. As philosopher Donna Haraway noted, “Technology is not neutral. We’re inside of what we make, and it’s inside of us.”

To understand these concerns, we must first define AI clearly—something made difficult by widespread myths. The media often portrays AI as on the brink of consciousness, ready to match the sentient machines of sci-fi. But this portrayal is misleading. In reality, we are building faster and more advanced calculators, not thinking beings.

Linguist Noam Chomsky, along with Ian Roberts and Jeffrey Watumull, argues that AI like ChatGPT doesn’t think but predicts. It matches patterns from vast data sets, lacking true understanding or reasoning. In other words, “AI doesn’t think. It simply calculates.”
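The claim that such systems “simply calculate” can be made concrete. The sketch below is a deliberately toy illustration, not how ChatGPT actually works: it predicts the next word purely from bigram frequency counts over a tiny made-up corpus. Real language models are vastly larger and use neural networks, but the underlying principle Chomsky describes, statistical prediction without understanding, is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast datasets the article mentions.
corpus = ("the mill gives you society the mill shapes society "
          "the mill serves power").split()

# Count which word follows which: pattern matching in its simplest form.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in the corpus, or None."""
    if word not in bigrams:
        return None
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "mill": the model counts, it does not understand
```

The predictor has no notion of what a mill or a society is; it only reproduces the regularities of its input, which is the point of the “calculates, not thinks” distinction.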

Federico Faggin, inventor of the Intel 4004 microprocessor, draws a clear line between machine knowledge and human experience. Machines handle data; humans experience meaning. Faggin even suggests a philosophical view, echoing ancient thought, that consciousness is uniquely human and irreducible.

So what does this mean for the future? If AI cannot truly think, then who shapes the decisions it makes—and whose values does it serve?

Chomsky and his colleagues asked ChatGPT if it has moral beliefs. The response: “As an AI, I do not have moral beliefs… My lack of moral beliefs is simply a result of my nature as a machine learning model.” This echoes the morally neutral stance of classical liberalism, which aims to remove values from public systems in favor of market logic.

In practice, AI appears to reinforce market values. It is poised to become the next global economic driver—automating jobs in fields like law, medicine, journalism, and manufacturing. Its values mirror those of capitalism: efficiency, profit, and detachment from human empathy.

Cambridge researcher David Krueger recently pointed out that nearly all AI research is backed by big tech. As these companies shape AI’s development, skepticism grows over whether reassurances from such insiders are valid or merely hopeful.

If society questions this alignment between AI and capitalism, it could challenge the historical pattern where technology dictates political power. But for now, AI is deeply tied to the goals of free-market capitalism.

The true danger of AI isn’t that it will become a rogue intelligence. The threat lies in its alignment with the values of predatory capitalism, which seeks to undermine community and social unity in favor of individualism and profit.
