Messaging and AI tools have all been used for elections in some way or another, reveals Riccardo Amati from the Mobile Ecosystem Forum (MEF)
In a year in which elections called half the world’s population to the polls, mobile messaging apps such as WhatsApp, Telegram, Signal, and Facebook Messenger have emerged as primary platforms for political campaigning. While social media giants such as Facebook, X/Twitter, and Instagram still play an important role, messaging apps have taken centre stage thanks to their immediacy, privacy, and intimacy, allowing campaigns to connect with voters in fresh ways. These apps offer new possibilities for grassroots engagement and targeted communication, yet they also bring risks such as disinformation, echo chambers, and privacy concerns.
Technology in politics
Messaging apps are ideal for sending rich multimedia content, such as videos, infographics, and campaign posters, directly to voters.
“The way people are using these apps is continuing to evolve,” said Katie Harbath, Head of International Affairs at Duco and former Public Policy Director at Facebook, in an interview. “In particular, short-form video is a popular way people want to consume information today. You need to be able to create these types of content. Political candidates need to constantly evolve.”
Campaigns frequently use these formats to explain complex policies, counter misinformation, or even show personal moments of candidates, fostering a sense of familiarity. Short videos and voice notes have been particularly popular, allowing candidates to deliver messages in their own voice, often informally, which resonates with voters more than formal speeches or statements do.
Artificial intelligence (AI) has also played an unprecedented role in the many elections in 2024, contributing to more sophisticated voter outreach, enhanced data analysis, and real-time monitoring of public sentiment.
Machine learning algorithms analyse data on voter preferences, allowing political organisations to tailor messages and better understand constituents’ needs. Rather than bombarding voters with generic messages, AI enables campaigns to address specific issues that matter to each demographic, fostering a more informed and engaged electorate.
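As a rough illustration of the kind of demographic tailoring described above (the survey records, issue labels, and grouping logic here are hypothetical, not drawn from any actual campaign), a sketch might aggregate voters' self-reported priorities and surface the dominant issue per demographic group:

```python
from collections import Counter, defaultdict

# Hypothetical survey data: each voter's self-reported top issue.
voters = [
    {"age_group": "18-29", "top_issue": "housing"},
    {"age_group": "18-29", "top_issue": "education"},
    {"age_group": "18-29", "top_issue": "housing"},
    {"age_group": "60+",   "top_issue": "healthcare"},
    {"age_group": "60+",   "top_issue": "pensions"},
    {"age_group": "60+",   "top_issue": "healthcare"},
]

def dominant_issue_by_demographic(records):
    """Return the most frequently cited issue within each demographic group."""
    issues = defaultdict(Counter)
    for record in records:
        issues[record["age_group"]][record["top_issue"]] += 1
    return {group: counts.most_common(1)[0][0] for group, counts in issues.items()}

print(dominant_issue_by_demographic(voters))
# {'18-29': 'housing', '60+': 'healthcare'}
```

A campaign tool built along these lines would then draft messaging around `housing` for younger voters and `healthcare` for older ones, rather than sending one generic message to everyone.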
Many candidates use AI-driven chatbots on social media and websites to answer voters’ questions and clarify their positions on key issues. These chatbots make information more accessible and can respond 24/7, enhancing voter access to information.
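In the simplest case, such a chatbot can be little more than fuzzy matching against a prepared FAQ list. The sketch below assumes a hypothetical set of campaign FAQ entries (none reflect a real candidate's positions) and uses fuzzy string matching to pick the closest answer:

```python
import re
from difflib import get_close_matches

# Hypothetical FAQ entries a campaign might prepare.
FAQ = {
    "where do i vote": "Find your polling station on the electoral commission website.",
    "what is your housing policy": "The candidate proposes expanding affordable housing.",
    "how do i register": "Registration is open online until 30 days before the election.",
}

def answer(question: str) -> str:
    """Match a voter's question to the closest known FAQ entry."""
    normalised = re.sub(r"[^a-z ]", "", question.lower()).strip()
    match = get_close_matches(normalised, FAQ.keys(), n=1, cutoff=0.5)
    return FAQ[match[0]] if match else "Sorry, I don't have an answer for that yet."

print(answer("Where do I vote?"))
# Find your polling station on the electoral commission website.
```

Production chatbots typically replace the fuzzy matcher with a language model, but the availability benefit the article describes is the same: the lookup runs around the clock with no staff on hand.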
In countries facing logistical challenges, AI has contributed to making elections more efficient. AI algorithms help optimise polling station locations, predict voter turnout, and even assist in managing voting infrastructure.
However, AI has also introduced new risks, including the spread of disinformation, voter manipulation, and privacy concerns.
Misuse of AI
One of the most concerning applications of AI in recent elections has been the spread of deepfake technology. Deepfakes—videos or images manipulated by AI to make it appear as if someone said or did something they didn’t—have been used to discredit candidates, confuse voters, or generate outrage. Another serious issue is disinformation campaigns. Bots can generate fake news articles, spread false information, and influence public opinion. These AI-generated posts can go viral quickly, creating an atmosphere of mistrust and division that is challenging to counteract.
In India, deepfakes were used in campaigns at the state level to target political opponents. Several cases involved the creation of fake videos showing opposition leaders allegedly making derogatory remarks or taking controversial stances. In one instance, a viral deepfake video depicted a prominent leader endorsing unpopular policies, which was quickly shared on WhatsApp and other messaging apps, reaching thousands before fact-checkers could intervene.
The US presidential election saw deepfakes used to create videos of candidates making inappropriate or damaging remarks that they never actually said. In some cases, these videos showed candidates seemingly struggling to answer basic questions or forgetting essential facts, aiming to create doubt about their competence. Some deepfakes were sophisticated enough to mimic mannerisms and vocal inflections, making them difficult to spot as fake at first glance. Although fact-checkers and platforms worked to address these, the videos reached wide audiences.
In Brazil, deepfakes targeted trust in the electoral process itself. Videos emerged that appeared to show government officials discussing rigging the election results. These videos, though later debunked, created widespread public distrust and led to protests questioning the integrity of the electoral system. Messaging apps like WhatsApp were again central to the dissemination of these deepfakes, allowing them to reach large audiences before fact-checking could catch up.
In Kenya, deepfakes focused on manipulating candidates’ words and sentiments on sensitive issues such as ethnic relations and social programmes. Some deepfake videos showed politicians appearing to make inflammatory remarks about particular ethnic groups, which heightened social tension in an already polarised climate. The videos were mainly circulated through Facebook and Telegram.
These examples illustrate the broad and diverse applications of deepfakes in the political realm, often with significant consequences. Although some social media platforms have policies to detect and remove deepfakes, messaging apps operate within closed, often encrypted networks, making it difficult for regulators to monitor content and tackle the spread of misinformation. This regulatory void raises concerns about the integrity of electoral processes and the potential for abuse.
Benefits and risks
In 2024, mobile messaging apps have redefined political campaigning. With their combination of reach, immediacy, and personalisation, these apps have become essential tools for engaging voters, sharing information, and building momentum. However, they, together with the increased use of AI, also bring unique risks that can compromise transparency, amplify misinformation, and deepen social divisions.
As messaging apps continue to dominate political campaigns, governments and technology companies will need to find ways to balance the benefits of these platforms with the need for accountability and transparency.

Riccardo Amati is from MEF (the Mobile Ecosystem Forum), a global trade body established in 2000 and headquartered in the UK with members across the world. As the voice of the mobile ecosystem, it focuses on cross-industry best practices, anti-fraud and monetisation. MEF provides its members with global and cross-sector platforms for networking, collaboration and advancing industry solutions.