The AI Governance Kenya Needs For Fair Credit Access
The Central Bank of Kenya's (CBK) recent Survey on Artificial Intelligence in the Banking Sector provides an honest and sobering account of where the country's financial sector stands amid the AI revolution. The survey comes at a time when the country is taking bold steps to position itself for the global digital economy, as the launch of the National AI Strategy early this year attests. Through the Strategy, Kenya aims to harness AI for economic transformation and citizen-centered innovation.
That notwithstanding, as is often the case with rapid technological evolution, policy ambition now confronts institutional readiness. According to the survey, while 50% of lenders are yet to adopt the technology, 65% of those already using it apply AI to credit risk scoring. This is not surprising, as credit decisions sit at the heart of trust in the financial system. They determine who gets access to opportunity and on what terms. Yet this is precisely why we must not postpone AI governance to some future moment. There needs to be an urgent, deliberate and collective effort to tackle emerging governance issues.
AI is indeed a powerful tool that offers financial institutions a new way of reimagining risk assessment. Through AI, financial institutions can move away from rigid credit scoring solutions heavily reliant on collateral, securities and pay slips, towards a more dynamic, data-driven approach reflective of real-world behavior. This shift is greatly supported by the rise of digital footprints: transaction histories, mobile money patterns, and behavioral analytics. Consequently, access to credit is expanded to previously unbanked and underserved populations, particularly those working in the informal economy, entrenching financial inclusion.
This promise, however, carries with it a sobering caveat: without a well-curated and ethically designed governing framework, AI can turn into a bad master, entrenching exclusion, perpetuating algorithmic bias and eroding transparency. Indeed, the CBK survey highlights this. Among the institutions leveraging AI for credit assessment, few have embedded mechanisms for bias detection, explainability or customer redress. This stark revelation is more than a compliance issue; it is a systemic risk.
Consequently, AI systems, more so in credit, ought not to be allowed to operate as black boxes. Where an institution denies a borrower a loan based on the recommendations of an algorithm, the institution should be able to explain that decision to the borrower. A system that cannot justify its decisions is not only unethical but also legally vulnerable and reputationally dangerous. This duty is even more pronounced in a society grappling with inequality and a digital divide such as ours. Simply put, the financial sector does not have the luxury of perpetuating exclusion under the guise of innovation and automation.
It is at this point that the National AI Strategy and the CBK must converge. The values of inclusivity, ethics and human oversight espoused in the AI Strategy need to be anchored in CBK guidelines on AI adoption in the sector. To ensure meaningful impact, these principles should be embedded not only in sector-specific regulatory frameworks but also in clear supervisory expectations for AI governance in financial services: expectations that address fairness, data integrity, algorithmic accountability, and proportional human oversight.
Even as regulators explore avenues to catalyze a broader reform agenda, including regulatory sandboxes that test not just predictive accuracy but also fairness, transparency and unintended harms, the compliance function within financial institutions also needs to evolve. It is no longer sufficient to review documentation at the end of the product pipeline. Compliance teams must be involved at the design stage of AI systems, interrogating the data being used, the assumptions being encoded, and the outcomes being optimized. There must be a strategic shift from being rule enforcers to being risk translators, shaping the internal ethics and external accountability of our institutions.
Ultimately, the future of AI in credit risk will not be determined by technology alone. It will be shaped by how we govern, whom we include, and what values we encode into our systems. We must ask ourselves: will AI be a tool for broadening access to finance or a shield for excluding those on the margins? Will it reduce human bias or repackage it into technical language we no longer interrogate?
CBK's survey has sparked an important conversation. While the National AI Strategy provides a guiding star, the heavy lifting still lies ahead. Financial institutions must make strategic, not cosmetic, investments in AI governance. Regulators must move from observation to obligation, setting enforceable standards and ethical thresholds. And the public must demand transparency, especially when algorithms affect their ability to build livelihoods and dignity through access to credit.
Kenya has never lacked vision. What we need now is regulatory coherence, institutional maturity, and a shared commitment to fairness. Only then can we say we are truly ready for AI, for risk, and for the future of finance.
This article was written by Jimmie Mwangi, Group Chief Information and Digital Officer (CIDO), Diamond Trust Bank.