How Artificial Intelligence Could Disrupt Nigeria’s 2027 Elections
In the days leading up to Nigeria’s 2023 presidential election, a controversial audio recording began circulating widely across social media platforms. The clip allegedly captured a private conversation involving Atiku Abubakar, former vice president of Nigeria; Aminu Tambuwal, former governor of Sokoto State; and Ifeanyi Okowa, former governor of Delta State.
In the recording, the three men appeared to be discussing plans to manipulate the election. The audio quickly ignited outrage online, with many Nigerians demanding that the Independent National Electoral Commission (INEC) investigate and stop what they believed was a conspiracy.
However, fact-checkers at TheCable later determined that the audio was not authentic: it was a deepfake generated using artificial intelligence.
By the time this discovery was made, the clip had already spread across multiple platforms and offline networks, shaping public opinion and fueling political debate. The incident illustrated just how easily synthetic media can influence political discourse during election cycles.
With Nigeria’s next presidential election scheduled for January 2027, experts warn that artificial intelligence could play a far more disruptive role than it did four years earlier.
AI Misinformation Already Shaped the 2023 Election
The 2023 election cycle was already marked by widespread misinformation, including the circulation of AI-manipulated media designed to promote specific political candidates.
One prominent example occurred in November 2022, roughly three months before the election. A manipulated video featuring Hollywood actors holding a placard reading "Yes, it makes sense to vote for Peter Obi in 2023" went viral across Nigerian social media.
Similarly, fabricated videos appeared to show Elon Musk and Donald Trump endorsing Peter Obi. These clips amassed thousands of views and engagements before fact-checkers confirmed that they were digitally altered.
Such incidents demonstrated how quickly AI-generated propaganda can spread during politically charged periods, often reaching millions before corrections are issued.
Rapid AI Adoption in Nigeria
Since the 2023 election, artificial intelligence has become significantly more mainstream in Nigeria.
A recent survey conducted by Google and Ipsos revealed that:
- 88% of Nigerian adults say they have interacted with an AI chatbot.
- 39% report using AI regularly in their daily work or personal life.
This rapid adoption is partly driven by the accessibility of AI tools. Platforms such as ChatGPT, Gemini, and Claude offer free versions alongside premium subscriptions, making them available to a broad user base.
Nigeria’s digital infrastructure also supports this growth. Approximately:
- 142 million Nigerians have internet access
- 85% of the population owns a smartphone
By the time Nigerians head to the polls again in 2027, AI tools could play a much larger role in shaping political narratives, influencing voters, and distributing campaign messaging.
While artificial intelligence offers legitimate benefits for communication and civic engagement, experts warn that the absence of clear regulatory frameworks could leave the electoral process vulnerable to disinformation campaigns and manipulation.
AI Could Transform Political Campaigning
Nigeria’s complex social landscape, featuring more than 250 ethnic groups and languages, presents unique challenges for political communication.
AI-powered technologies could help political campaigns address these challenges by enabling:
- Automatic translation into multiple local languages
- Personalized messaging tailored to specific voter groups
- Faster engagement with large audiences through automated tools
In theory, these capabilities could improve voter education and political participation.
Kola Ijasan, Research Director at Research ICT Africa, says artificial intelligence has the potential to improve democratic engagement but warns that it must be used responsibly.
"AI has the capability to enhance voter education, government efficiency, and citizen participation. However, without regulation, particularly during elections, the dangers of excessive exposure and abuse are real," Ijasan explains.
Synthetic Media Poses the Biggest Threat
Despite its potential benefits, artificial intelligence introduces significant risks to the electoral environment, particularly through synthetic media.
AI systems can now generate highly realistic:
- Videos
- Images
- Voice recordings
- Campaign posters
- Fake news screenshots
These tools can be used to create deepfake videos showing politicians saying things they never said or to fabricate audio recordings that mimic a candidate’s voice.
AI can also generate large volumes of propaganda content at scale, flooding social media platforms with coordinated narratives designed to mislead voters.
According to Mayowa Tijani, a journalist and fact-checker at TheCable, the major shift since the last election is not the existence of AI-generated content but how fast, accessible, and convincing the technology has become.
"We can expect the kind of AI use we experienced in 2023, but the sophistication has gotten a lot better. As a result, even for fact-checkers, it will be difficult to differentiate what is real and what is not," Tijani says.
He also warns that AI could be used to fabricate election materials themselves.
"Where it gets more problematic is when we see AI being used to forge election results, because of how good they’ve gotten. We can have an AI-generated result with human handwriting. This is something we’ve not experienced before that we could experience in the coming elections."
Technical Barriers Still Exist, But Not for Political Actors
Although AI tools are becoming more powerful, producing convincing deepfakes still requires technical expertise and resources.
Ayomide Odumakinde, an AI researcher at Cohere, explains that generating high-quality synthetic media often requires more than simply using free tools.
"Most of the high-quality tools that can be used for video deepfakes are locked behind subscriptions. The open-source variants are not as great," Odumakinde says.
However, he notes that audio deepfakes are significantly easier and cheaper to produce, and are often harder to detect than manipulated videos.
Dr. Jeffery Otoibhi, a medical doctor and AI research engineer, adds that while AI systems sometimes struggle with aspects of Nigerian culture, such as traditional clothing or local skin tone variations, skilled developers can overcome these limitations.
"It takes a lot of work, patience, and understanding of the AI system, and not everyone can do it. Many of the videos that circulate on Facebook and WhatsApp are easy to detect in the first few seconds," he says.
Even so, these technical barriers are unlikely to deter well-funded political actors who can hire experts to produce convincing disinformation campaigns.
Nigeria’s Information Ecosystem Is Highly Vulnerable
Nigeria’s digital environment already struggles with misinformation, making it particularly susceptible to AI-driven manipulation.
Several factors increase the country’s vulnerability:
- Low levels of media literacy
- Difficulty distinguishing authentic content from manipulated media
- Highly polarized political discourse
- Rapid information sharing on messaging platforms
While fact-checking organizations are working to counter misinformation, experts say their efforts cannot keep up with the scale and speed at which AI-generated content can spread.
Another challenge lies in the limitations of detection tools.
According to Lois Ugbede, Assistant Editor at the fact-checking organization Dubawa, many AI detection systems are trained primarily using Western datasets and may struggle to identify synthetic media in Nigerian contexts.
"AI models for detecting deepfakes still have shortcomings, and some of those include the inability to adapt to local language or code-switching when people speak," Ugbede says.
WhatsApp Remains the Hardest Platform to Monitor
The biggest challenge may come from private messaging platforms, particularly WhatsApp.
As of April 2024, Nigeria had roughly 51 million active WhatsApp users, making it one of the most influential channels for political communication.
During the 2023 election cycle, political campaigning frequently occurred inside WhatsApp groups, where messages including voice notes and forwarded posts circulated widely with little oversight.
Because WhatsApp messages are encrypted and difficult to trace, misinformation that spreads there is extremely difficult to verify or correct.
Charles Ekpo, a lecturer in Peace Studies at Arthur Jarvis University, says this environment creates a major challenge for fact-checkers.
"It is great that people can fact-check with tools like Grok. It makes it easier to know what is real and what is not," Ekpo explains. "The problem is that when this content is forwarded to platforms like WhatsApp, where no one can verify or track its movement, it becomes even more dangerous."
AI’s Role in Elections Is Already Global
Nigeria is not alone in facing these challenges. Across the world, artificial intelligence has increasingly influenced political campaigns.
In 2024, more than 60 countries held national elections, many of which saw the use of AI-generated campaign materials.
Examples include:
- India: Political campaigns reportedly spent $50 million on AI-generated content, including deepfakes of deceased political figures and fabricated celebrity endorsements.
- Indonesia: Campaign teams used generative AI to create cartoon-style images that softened a candidate’s public image and appealed to younger voters.
- Pakistan: AI-generated speeches allowed an imprisoned political leader to deliver virtual messages to supporters.
- United States: Investigations were launched after robocalls using an AI-generated voice resembling Joe Biden discouraged voters from participating in a presidential primary.
These examples highlight how rapidly AI is becoming embedded in modern political communication.
The Greatest Risk: Erosion of Public Trust
Beyond misinformation itself, experts warn that the most damaging consequence of AI in elections may be the erosion of trust in democratic institutions.
Kola Ijasan argues that the biggest danger is not simply that voters may believe false information but that they may begin to doubt everything.
"The actual threat is not the fact that voters are not able to distinguish between truth and fiction. It is that trust that erodes," he says. "Once citizens begin questioning everything, including valid findings, democracy becomes shaky."
What Experts Say Nigeria Should Do Before 2027
Researchers and fact-checkers say Nigeria must begin preparing now to mitigate AI-related election risks.
One priority is expanding public awareness campaigns, particularly for voters who are not active online.
Mayowa Tijani stresses the importance of reaching voters through radio and WhatsApp, which remain powerful communication channels across the country.
"We need to understand that a bigger part of the voting population is still not online, so whatever intervention we are designing has to go beyond the Internet ecosystem," he says.
Experts also call for early funding for fact-checking initiatives, allowing organizations to respond to misinformation before it spreads widely.
At the policy level, Ijasan believes governments should avoid heavy-handed censorship but introduce transparency requirements for political campaigns.
"Political actors who utilise AI-generated content during campaigns must be obliged to reveal the usage of the same," he says. "Unambiguous standards of labelling can help decrease deception without curtailing legitimate speech."