5 Strategies That Can Accelerate AI Use In Africa
With the growing appreciation of Artificial Intelligence (AI) across the world, African governments, policymakers and industry stakeholders have been urged to collaborate to advance AI governance and drive adoption across the continent.
Speaking during the second session of the Africa AI Journalist Academy organised by Microsoft, Akua Gyekye, Government Affairs Director for Microsoft Africa, unveiled five strategies that can help accelerate AI use in Africa.
The first strategy is to implement and build upon new government-led AI safety frameworks. Several African countries have already begun to formulate their own legal and policy frameworks and are helping to lead discussions around AI policy and strategy development at the regional, continental, and global levels, offering valuable insights for other countries looking to do the same. As part of these efforts, the African Union (AU) continues to convene experts from across the continent and this year published a draft policy setting out a comprehensive continental AI regulatory strategy for African countries.
“This coordinated approach aims to consider the “responsible, safe and beneficial use” of the technology for all Africans. Once adopted, this framework would help countries that lack AI policies or regulations to create their national strategies and would also urge those that have them to revise and harmonise their policies with the AU’s,” noted Gyekye.
The second strategy calls for safety brakes for AI systems that control critical infrastructure. While most potential AI scenarios do not pose significant risks, it’s going to be increasingly important to identify those high-risk situations that will require ‘safety brakes’, said Gyekye. “One way for governments to begin developing this safety mechanism is by defining the class of high-risk AI systems that are being deployed to control critical infrastructure and then requiring developers to build and embed such added layers of security in the form of safety brakes.”
The third strategy is to develop a broader legal and regulatory framework based on the technology architecture for AI: addressing AI's legal and regulatory challenges calls for a framework that mirrors the way the technology itself is structured.
The fourth strategy is to promote transparency and ensure academic and public access to AI. “A key aspect of AI policy that will require serious discussion in the coming months and years is the balance and tension between security and transparency, which will be important to improve the understanding of security needs and develop best practices,” explained Gyekye.
The fifth strategy calls for new public-private partnerships to use AI as an effective tool to address the inevitable societal challenges that come with new technology.
“AI is a powerful tool with immense potential for good. Like with other technologies, however, there are some who will try to use it as a weapon. Fortunately, the technology can also be harnessed to fight against the abuse of AI and to address societal challenges. Public and private partnerships between governments, companies and NGOs will be needed to drive progress in this and other key areas, from skills development to sustainability efforts,” said Gyekye.