AI Threats Are Rising. Is East Africa Prepared?
From deepfakes to zero-day exploits, the battlefield for digital security is shifting rapidly, and the weapon of choice is artificial intelligence (AI). In recent years, AI has moved from research labs into the mainstream, powering innovations across industries. Banks rely on it for fraud detection, hospitals for diagnosis, and governments for service delivery. Yet, as adoption grows, so does exposure to new risks. Criminal groups are increasingly weaponising AI, creating threats that are more sophisticated, faster and harder to detect.
In Kenya and across East Africa, where digital transformation has accelerated through mobile money, e-commerce and e-government platforms, the region finds itself particularly vulnerable. Adversaries are now using AI to generate highly convincing phishing emails, produce deepfakes that impersonate leaders and executives, and run large-scale password-cracking operations at speeds that overwhelm traditional defences.
In some cases, attackers manipulate training data to corrupt systems, a tactic known as data poisoning. These methods are not theoretical; they are already in circulation, and they exploit technologies meant to safeguard organisations. The World Economic Forum's Global Cybersecurity Outlook 2025 notes that almost half of global organisations now view adversarial use of generative AI as their top concern. For East Africa, where investment in security has not always kept pace with digitisation, the stakes are especially high.
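For readers who want to see the mechanics, the sketch below is a simplified, hypothetical Python example (using the open-source scikit-learn library and a synthetic dataset, not any real banking data) of label-flipping data poisoning: an attacker who can tamper with even a fraction of the labels a model is trained on can quietly degrade its accuracy before it is ever deployed.

```python
# Minimal illustration of label-flipping data poisoning (illustrative only).
# Assumes scikit-learn and numpy; the dataset and model are synthetic stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Synthetic stand-in for a fraud-detection dataset (10% "fraud" class).
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)

def train_and_score(labels):
    """Train on the given training labels and report accuracy on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

# Clean baseline.
print("clean accuracy:", round(train_and_score(y_train), 3))

# Attacker flips 20% of the training labels before the model is retrained.
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print("poisoned accuracy:", round(train_and_score(poisoned), 3))
```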
These developments come at a time when traditional Security Operations Centres (SOCs) are already under pressure. Analysts often find themselves drowning in thousands of alerts each day, many of them false positives, yet each requiring human investigation. Systems are fragmented, and processes remain largely manual. On top of this, the shortage of cybersecurity professionals is even more acute in Africa than in other regions, making it difficult for organisations to keep pace with rising threats. It is a perfect storm: more sophisticated attackers, more digital assets to protect and fewer skilled defenders.
Ironically, the same technology driving these threats may also provide the most effective defence. AI can ease the load on human analysts by automating repetitive tasks, correlating data from many systems in real time and surfacing the anomalies that signal zero-day attacks. In security operations, this means faster detection, smarter prioritisation and better accuracy. Generative AI is already being applied to translate dense technical findings into language business leaders can understand, bridging the gap between security teams and decision-makers. For organisations in East Africa, where budgets and skills are often constrained, this capacity to augment human expertise rather than replace it could prove transformative.
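To make that concrete, the following is a simplified, hypothetical Python sketch (again using scikit-learn, with invented numbers rather than real telemetry) of the kind of anomaly scoring that can push the most unusual security events to the top of an analyst's queue; a production SOC would feed such a model live data from its SIEM and log pipelines.

```python
# Minimal sketch of AI-assisted alert triage via anomaly detection (illustrative only).
# Assumes scikit-learn and numpy; features and events here are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Synthetic event features: login attempts per hour, bytes transferred, failed MFA count.
normal_events = rng.normal(loc=[20, 5_000, 1], scale=[5, 1_000, 1], size=(2000, 3))
suspicious_events = rng.normal(loc=[300, 80_000, 12], scale=[30, 5_000, 2], size=(5, 3))
events = np.vstack([normal_events, suspicious_events])

# Train on historical "normal" activity, then score all events; the most negative
# scores are the strongest anomalies and become the highest-priority alerts.
detector = IsolationForest(contamination=0.01, random_state=7).fit(normal_events)
scores = detector.decision_function(events)
top_suspects = np.argsort(scores)[:5]
print("highest-priority event indices:", top_suspects)
```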
Yet, the adoption of AI in cybersecurity raises serious ethical and practical questions. The conversation is no longer about whether AI should be used, but about how it should be deployed responsibly. Data privacy is one of the most pressing concerns, since AI models depend on large volumes of information to function effectively. Without careful governance, sensitive or personally identifiable data can be exposed. Transparency is equally critical. AI systems can inherit biases from their training data and their designers, and if they are treated as “black boxes” whose outputs cannot be explained, they risk undermining trust. Organisations must ensure that AI-driven security decisions are explainable and reliable. There is also the human factor to consider. While AI can reduce labour demands, it should not be seen as a replacement for human talent. For a region still developing its cybersecurity workforce, the focus should be on how AI augments capacity, not eliminates it.
Kenya and its neighbours are at a defining moment. As fintech services expand, cloud adoption accelerates and government services move online, the attack surface is constantly expanding. AI-powered defences are no longer optional; they are an essential part of building resilience. But technology alone will not be enough. Organisations need to invest in training, so that cybersecurity professionals can work effectively alongside AI tools. They must also encourage cooperation across industries. Banks, telecommunications operators, government agencies and private enterprises all face similar threats; sharing threat intelligence and applying AI collaboratively within their respective domains will be essential to building collective defences. Finally, leaders must embed ethics and governance into every AI deployment, ensuring that the rush to adopt new technology does not compromise trust or safety.
The story of cybersecurity began decades ago with simple worms and viruses. Today, it has evolved into an arms race between adversarial AI and defensive AI. The outcome will depend on how quickly and responsibly organisations adapt. For East Africa, the opportunity lies not only in closing the security gap but also in positioning the region as a leader in responsible digital transformation. If adopted thoughtfully, AI can relieve the crushing burden on overstretched security teams, close the skills gap and create defences that adapt as fast as attackers innovate. It may not be a cure-all, but it has the potential to tip the balance — empowering defenders to outsmart even the most advanced threats.
Lloyd Oandah is Technical Architect, Cybersecurity, at NTT DATA East Africa