In 2023, artificial intelligence surged into the mainstream, sparking a wave of excitement—and alarm—about its growing influence on society.
ChatGPT Sparks Educational Backlash
The year kicked off with controversy in the education sector as students turned to OpenAI’s ChatGPT to write essays and complete assignments. In early January, the New York City Department of Education became one of the first to ban the chatbot on its devices and networks, igniting a larger debate over the ethical use of generative AI in academic environments.
With the popularity of ChatGPT, built by Microsoft-backed OpenAI, rising quickly, other tech giants responded with their own tools, such as Google’s Bard, Baidu’s Ernie, and Meta’s LLaMA. While image generators like Stable Diffusion and DALL-E amazed users with AI-created art, and other models churned out video and code, critics raised alarms about AI-fueled misinformation, cyberbullying, and violations of intellectual property.
Calls for a Pause in AI Progress
By March, anxieties about unchecked AI development reached a boiling point. Over 1,000 prominent voices, including Steve Wozniak and Elon Musk, signed an open letter urging a six-month pause on training AI systems more powerful than GPT-4, citing risks to society and humanity. Though a pause never materialized, governments began drafting policies to rein in the technology’s growth.
As 2023 drew to a close, it became clear that it would be remembered as a watershed year for artificial intelligence.
Turbulence at OpenAI
OpenAI, the company behind ChatGPT, grabbed headlines again in November when CEO Sam Altman was unexpectedly fired by the board, which cited a lack of “consistent candor.” Although the specifics remained vague, the move was widely seen as stemming from internal tensions over prioritizing safety versus commercialization.
The firing triggered a week of intense drama, with nearly all OpenAI staff threatening to resign and Altman briefly joining Microsoft before being reinstated alongside a new board. The episode highlighted ongoing struggles in the AI world between ethical responsibility and business ambition.
A July Pew Research Center survey of 305 experts found that 79 percent said they were either more concerned than excited about AI’s future, or equally concerned and excited. Worries ranged from mass surveillance and authoritarian misuse to job losses and deepening social isolation.
Sean McGregor of the Responsible AI Collaborative noted that while public scrutiny of AI development is encouraging, many in tech remain uneasy with the attention. He emphasized that AI needs to reflect the interests of those most affected and framed AI ethics as a modern version of long-standing societal dilemmas.
Global Efforts to Regulate AI
December saw the European Union reach a significant agreement on new AI legislation after a year of regulatory progress by governments and global institutions. Core issues include the source and quality of training data—often scraped from the internet with little regard for consent, accuracy, or fairness.
The EU’s proposed rules require transparency from AI developers about data usage, while setting limits on risky applications and establishing mechanisms for public complaints. The U.S. took a step forward in October with President Biden’s executive order on AI oversight, and the UK hosted the AI Safety Summit in November, bringing together global leaders and tech firms.
China also implemented interim regulations mandating “security assessments” for AI products before release and restricting politically sensitive content in training data.
On the international front, 20 countries—including the U.S., UK, Germany, Israel, and Chile—signed a preliminary agreement to promote AI safety.
AI’s Impact on Jobs and Industry
Legal battles emerged across the U.S. as writers, artists, and media outlets filed lawsuits over copyright concerns linked to AI training. Meanwhile, fears about job automation became a flashpoint in Hollywood, contributing to lengthy strikes by actors and screenwriters.
Goldman Sachs projected that generative AI could expose the equivalent of up to 300 million full-time jobs to automation, with roughly two-thirds of occupations in Europe and the U.S. subject to some degree of it. Still, many analysts stressed that AI will likely enhance rather than replace most occupations. According to an August report by the International Labour Organization, clerical roles are among the most vulnerable.
A Rising Tide of Deepfakes
Looking ahead to 2024, the proliferation of AI-generated content is set to test global institutions as billions of people head to the polls in more than 40 countries—including high-stakes elections in the U.S., India, Indonesia, and Venezuela.
Deepfakes and synthetic media have already been weaponized in conflict zones like Ukraine and Gaza, and their use in elections is expected to grow. Platforms are responding: Meta announced restrictions on AI-generated political ads, while YouTube now requires labeling of lifelike AI-generated videos.