2024 is a decisive year for global politics: the world's biggest democracies are holding elections, including the U.S. presidential election, the Indian Lok Sabha elections, and the U.K. general election.
One factor that is set to play a significant role in these elections is the integration of artificial intelligence (AI) into the electoral process. While it may have its benefits, such as analyzing data and strategizing campaigns, it also brings challenges that must be addressed.
There are a number of AI tools that can generate lifelike images, videos, and voices in a matter of seconds, and this is increasingly becoming a cause for concern since it has the potential to disrupt elections and their outcomes.
The World Economic Forum has released its “Global Risks Report 2024,” and the results are eye-opening. Among the top 10 risks deemed most likely to pose a significant threat over the next two years, AI-derived misinformation and disinformation takes the lead. AI-driven falsehoods spread like wildfire; they move too fast and too far to contain or correct once they are loose.

In the next two years, nearly 3 billion people will vote in elections in many countries, like the United States, India, the UK, Mexico, and Indonesia. If there’s a lot of fake news and false information during these elections, it could make people doubt whether the new governments are legitimate. This could lead to big problems like protests, violence, and even terrorism. Over time, this could also harm how democracy works in these places.

In a disturbing turn of events, residents of New Hampshire received robocalls from an AI-generated imitation of Joe Biden, urging them to skip an upcoming primary (an election to select a party’s nominee for the general election). This incident is currently under investigation by the New Hampshire Justice Department.
In April 2023, a politician in the Indian state of Tamil Nadu disputed a 26-second voice clip in which he appeared to accuse his own party of unlawfully amassing $3.6 billion, according to Rest of World. The politician rejected the recording’s authenticity, calling it “machine-generated.” Even experts were uncertain whether the audio was real or fabricated.
Content generated by artificial intelligence tools, be it images, videos, or audio, is becoming increasingly difficult to distinguish from legitimate and authentic content. The advancements made in artificial intelligence technology have enabled the creation of highly lifelike and persuasive content capable of easily deceiving the general public. This poses a substantial threat since it could seriously disrupt electoral processes and undermine public trust in democratic institutions.
The potential ramifications of integrating AI into elections are vast, from swaying public opinion to manipulating voting patterns and ultimately altering election outcomes. This disruption of electoral processes and erosion of public trust in democratic institutions is a dangerous path to tread. Fearing this, OpenAI published a blog post announcing new policies to safeguard the democratic process during elections.
The blog read, “Protecting the integrity of elections requires collaboration from every corner of the democratic process, and we want to make sure our technology is not used in a way that could undermine this process.”
The AI research and development company is taking proactive measures, including red teaming, user feedback collection, and safety mechanisms, to anticipate and address potential misuse of its tools, such as misleading deepfakes or chatbots impersonating candidates. Tools like DALL·E are going to be equipped with guardrails to decline the generation of images of real individuals, including political candidates.
The company will also implement election-related policies, including bans on building applications for political campaigning and lobbying, chatbots impersonating real entities, and applications that discourage democratic participation.
Through its collaboration with the National Association of Secretaries of State (NASS), ChatGPT will redirect users with election-related questions to NASS’s nonpartisan website for accurate and reliable voting information.
Meta has also expressed concerns over the potential misuse of generative AI in political advertising. In November of last year, Meta confirmed that it would block political ads created with its AI-based ad creation tools, citing the risks associated with the technology.
AI in elections is a double-edged sword. Harnessing its advantages while minimizing its drawbacks requires a comprehensive strategy: prioritizing cybersecurity measures and improving media literacy and critical thinking among voters so they can identify and reject false information.
It is equally imperative to implement ethical protocols and regulations governing AI in political campaigns to ensure accountability and transparency, so that the political landscape is not influenced or dominated by malicious actors.