Your Data, Your Power: Celebrating Data Privacy Day
Today is Data Privacy Day.
This year, the Office of the Data Protection Commissioner (ODPC) in Kenya celebrated it in Eldoret City under the theme "Safeguarding Personal Data to Spur Digital Transformation and Economic Development".
This comes at a critical time when artificial intelligence (AI) is revolutionising the world. AI adoption is now mainstream, embedded in our day-to-day activities despite initial consumer reservations, and is expanding its reach in data collection through technologies such as voice recognition and natural language processing. AI-powered virtual assistants and speech services like Siri, Amazon Alexa, Google Assistant and Microsoft Azure Speech Services have revolutionised how AI technology is used worldwide, making it easier, faster and more convenient for the everyday consumer to benefit from sophisticated AI systems.
Most AI systems we encounter today are classified as “narrow AI.” These systems are deliberately designed to excel in specific tasks or domains. Often referred to as “augmented intelligence,” they are intended to enhance, rather than replace, human intelligence. In contrast, artificial general intelligence (AGI) refers to AI capable of performing across multiple fields with human-like versatility.
Expanding on the concept of AGI, artificial superintelligence is envisioned as AI that not only achieves general intelligence but also surpasses human intelligence across most fields. As digital transformation accelerates, AI has made its way into social media applications and digital platforms for e-commerce, e-government, telemedicine and more. For software and app developers and programmers, AI is now a powerful tool making valuable contributions to online content moderation, data analysis, 'automated journalism', targeted advertising and marketing, and fraud detection. However, as AI technology grows, so do concerns surrounding its security, accuracy, legal responsibility and privacy.
Kenya currently lacks a comprehensive regulatory framework or specific policy on AI, relying instead on existing laws such as the Data Protection Act (DPA), which are insufficient to address AI's complexities. This regulatory gap raises challenges in safeguarding data privacy, managing ethical concerns, and ensuring the safe use of AI technologies. Recognising this, the Ministry of Information, Communications and the Digital Economy (MICDE) published a draft of Kenya's National AI Strategy (the Strategy) in January 2025, following a consultative process. The Strategy aims to create an environment that fosters AI-driven economic growth, improves public services, and supports inclusive development, aligning with Kenya's vision to become a regional AI hub.
The primary legislation governing data privacy in Kenya is the DPA. The DPA aims to regulate the processing of personal data and establish a regulatory framework for the collection, handling, and sharing of personal information. Under the DPA, data subjects have the right to know how their data is being processed, as well as the right to access and rectify their data. Beyond the DPA, sector-specific regulations apply to certain industries such as healthcare, financial services, and telecommunications, to name a few.
Data Privacy And Its Interplay With AI Privacy
AI privacy is deeply intertwined with the broader concept of data privacy. Data privacy, often referred to as information privacy, is founded on the principle that individuals should have control over their personal data, including the ability to determine how companies (data controllers and processors) collect, store, process, and use that data. However, while data privacy as a concept predates the advent of AI, the emergence and rapid advancement of AI technologies have significantly reshaped how data privacy is understood and approached by both users of AI systems and technology companies.
AI’s reliance on data sets, including personal data, underscores the need for robust safeguards to ensure transparency, accountability, and consent. Protecting sensitive information is critical to maintaining trust, which is essential for fostering digital transformation and unlocking AI’s potential to drive economic development while safeguarding individuals’ rights and freedoms.
AI Privacy And Data Privacy Best Practices
The data protection principles under the DPA stem from the ever-present need to preserve the right to privacy. They require, among other things, a lawful basis for processing personal data (which in certain instances includes obtaining informed consent from data subjects), the revocability of consent, the use and storage of data for a specified purpose and period, and the eventual destruction of any data that is no longer in use.
These principles extend data privacy obligations to third parties handling a data subject's personal data, including when training algorithms and models, requiring them to keep that data safe and private in line with the disclosed collection and processing purposes. This means that companies, tech developers, governments, internet service providers, healthcare practitioners, AI research labs, social media platforms and others are all responsible for safeguarding a user's data privacy.
Organisations can refine their AI privacy approaches to comply with data protection principles and build trust with users and regulators. Our key recommendations from a general perspective would include:
- Conducting risk assessments on all models trained and used, with a keen focus on data leakage.
- Limiting data collection – companies, developers, and coders should restrict the collection of training data to what can be lawfully obtained and used in a manner that aligns with the reasonable expectations of the individuals whose data is being collected.
- Seeking and confirming consent – provide the users with mechanisms for consent, access, and control over their data where applicable.
- Following security best practices – such practices include using encryption, pseudonymisation, anonymisation, cryptography, and access-control mechanisms where possible in line with business considerations.
- Providing more internal controls and protection for data collected from sensitive sectors – these sectors include health, employment, education, criminal justice and financial services. Data generated by or about children is also considered sensitive, even if it does not specifically fall under one of the listed sectors.
- Reporting on data collection and storage – companies should also proactively provide general summary reports and updates to users about how user data is used, accessed and stored.
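To make two of the recommendations above concrete, the short Python sketch below illustrates data minimisation (keeping only fields with a disclosed purpose) and pseudonymisation (replacing a direct identifier with a keyed hash). The function names, field names, and key-handling choices are illustrative assumptions for this article, not requirements of the DPA or any particular standard.

```python
import hmac
import hashlib

# Illustrative secret key; in practice it would be stored separately
# from the data (e.g. in a secrets manager) so pseudonymised records
# cannot be reversed without it.
SECRET_KEY = b"replace-with-a-key-from-a-secrets-manager"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a keyed
    hash. Unlike a plain hash, HMAC with a secret key resists dictionary
    attacks: re-identification requires access to the key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

def minimise(record: dict, allowed_fields: set) -> dict:
    """Data minimisation: retain only the fields needed for the
    disclosed processing purpose, dropping everything else."""
    return {k: v for k, v in record.items() if k in allowed_fields}

# Hypothetical user record with one field collected beyond need.
record = {
    "email": "jane@example.com",
    "age_band": "25-34",
    "gps_trace": ["..."],  # not needed for the stated purpose
}

safe = minimise(record, {"email", "age_band"})
safe["email"] = pseudonymise(safe["email"])
```

The same identifier always maps to the same pseudonym, so records can still be linked for analysis or model training without exposing the underlying personal data.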
In conclusion, safeguarding personal data is essential for building trust, driving innovation, and supporting digital transformation. By ensuring transparency, accountability, and ethical use of AI within the context of data protection, stakeholders can unlock its potential to enhance public services, create economic opportunities, and foster inclusive commercial growth. Robust data protection practices protect individual rights while enabling trust-driven innovation and sustainable economic development.
This article was written by Ariana Issaias, Partner, and Richard Odongo, Associate, Bowmans Kenya