Artificial Intelligence: A Tool for Power and Privilege

AI Surveillance Strengthens Control and Reduces Oversight

What do a Yemeni refugee waiting for aid, a British supermarket worker, and a struggling university student have in common? They are all being assessed through some form of artificial intelligence.

Wealthy nations and global corporations have poured billions into AI—an umbrella term covering computing techniques like machine learning that gather, sort, and analyze vast amounts of data to forecast our behavior.

Public perception of AI has always swung between intense excitement and disillusionment. Is a robot really going to take my job? For those outside the tech bubble, it's tough to separate real breakthroughs from exaggerated claims.

Virtual reality pioneer and computer scientist Jaron Lanier has deep insights into AI, having worked with Marvin Minsky, one of AI’s early visionaries. Lanier maintains that AI has largely functioned as a marketing term. Reflecting on past discussions with Minsky, he recalls Minsky admitting that the hype around AI helped secure military funding in earlier decades. Claiming that machines would one day surpass human intelligence made funders eager to invest.

However, Lanier warns that AI often serves as a convenient excuse for the powerful to dodge accountability. If a machine makes a decision, who can challenge it? That’s a question society needs to answer urgently. Machine learning is advancing fast and is now embedded in both major tech firms and under-resourced government systems. In areas from hiring to healthcare to policing, we’re increasingly surrendering decision-making to algorithms—or, more accurately, to those programming them.

AI’s Promise vs. AI’s Risks

AI has brought some remarkable benefits—such as enhancing renewable energy systems or enabling earlier cancer detection. But a major concern lies in how AI is being used to evaluate and predict human behavior, particularly by assigning risk scores.

As a lawyer who handled terrorism-related cases, I’ve long examined how society interprets risk. Think back to former Vice President Dick Cheney’s “one percent doctrine,” which held that even a tiny chance of a terror attack should be treated as a certainty. This philosophy justified preemptive action based on mere suspicion.

That mindset continued into the Obama era, amplified by machine learning. Drone strikes, for example, could be launched not on the basis of who someone was, but on what an algorithm suggested about their behavior patterns: who they contacted or where they traveled. As former NSA and CIA director Michael Hayden bluntly put it: "We kill people based on metadata."

This risk-based approach now influences commercial decisions as well. Whether it’s law enforcement in Los Angeles trying to predict crime, or financial institutions gauging your likelihood of illness, unemployment, or debt, AI is central to risk analysis.
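To make the idea of a risk score concrete, here is a minimal sketch of the kind of arithmetic such systems perform. Everything in it, the feature names, the weights, the bias term, is invented purely for illustration; real scoring models are proprietary, vastly larger, and rarely open to inspection.

    import math

    # Hypothetical behavioral features for one person. Every name and
    # weight below is fabricated for illustration only.
    features = {
        "flagged_contacts": 2,   # contacts already on a watchlist
        "trips_to_region": 1,    # travel to a monitored region
        "missed_payments": 0,    # late payments on record
    }
    weights = {
        "flagged_contacts": 1.3,
        "trips_to_region": 0.8,
        "missed_payments": 0.5,
    }
    bias = -3.0

    # Logistic model: a weighted sum of features squashed into a 0-1 "risk" value.
    z = bias + sum(weights[name] * value for name, value in features.items())
    risk_score = 1 / (1 + math.exp(-z))

    print(f"risk score: {risk_score:.2f}")  # 0.60 for this made-up profile

The unsettling part is not the arithmetic but what follows it: a single threshold applied to a number like this can trigger extra screening, a denied loan, or worse, which is precisely the logic the one percent doctrine prefigured.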

The Digital Divide: From Connectivity to Control

Originally, the "digital divide" referred to unequal access to the internet, and the fix seemed obvious: connect everyone. Projects like One Laptop Per Child and digital ID initiatives set out to do exactly that.

But a newer divide has emerged—between those who understand and control the data, and those who are merely analyzed by it. The knowledge holders—the data collectors and algorithm developers—now hold the upper hand. They shape the world based on their values, while the rest of us are optimized and profiled.

AI is creating a new hierarchy determined by proximity to computational power. This is the deeper issue we must grapple with: how AI can entrench existing power structures.

This isn't the future we should accept. People are right to question the direction AI is taking and to push for more democratic control of the technology.

Consider how unevenly the technology's burdens already fall. I use an iPhone, which limits tracking by default more than the cheaper Android models that lower-income users typically carry. When I apply for jobs in law or journalism, human interviewers make the decisions; a candidate applying to a UK supermarket may instead face AI-powered "expression analysis." We shouldn't tolerate a system in which the wealthy receive personalized, human evaluation while everyone else is reduced to an algorithmic profile.

Computation Must Not Replace Democracy

If we don’t confront what Shoshana Zuboff calls “the substitution of computation for politics,” we risk losing democratic oversight. Critical societal decisions are increasingly being made by prediction engines, not through public debate.

AI must reflect collective values, and that requires public scrutiny and regulation. Just as laws prohibit discrimination, we must decide whether certain algorithmic judgments should be off-limits. Should tech companies face mandatory audits to uncover bias and inequality in their systems?
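What might such an audit actually check? One of the simplest tests is demographic parity: whether a system's favorable decisions are distributed evenly across groups. The sketch below is a toy version using fabricated decision records; a real audit would run on actual decision logs and apply a battery of fairness metrics, not just this one.

    # Toy fairness audit: compare favorable-outcome rates across groups.
    # The records below are fabricated solely to show the calculation.
    decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]

    def approval_rate(group: str) -> float:
        rows = [d for d in decisions if d["group"] == group]
        return sum(d["approved"] for d in rows) / len(rows)

    gap = approval_rate("A") - approval_rate("B")
    print(f"A: {approval_rate('A'):.2f}  B: {approval_rate('B'):.2f}  gap: {gap:.2f}")
    # A: 0.67  B: 0.33  gap: 0.33 -- a disparity this large would warrant scrutiny

Mandatory reporting of numbers like these, verified by independent auditors, is one concrete form the public scrutiny described above could take.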

And what about the tech giants, Facebook and Google among them, that are shaping the direction of AI and influencing billions of lives? Like the industrial monopolies of the past, they may simply be too powerful. Perhaps it's time to break up Big Tech.

These issues affect everyone. Token ethics committees won’t fix them. Only through transparency, independent oversight, and inclusive participation can we create an equitable AI-powered future for all.
