Should Artificial Intelligence Use Be Regulated?
Many of us are now conversant with AI and have easy access to it. People have honed the skills and know-how to navigate AI and related technologies, and nowadays almost everyone is turning to AI for its efficiency.
AI not only possesses the capability to generate comprehensive reports but can also create incredibly realistic images that could deceive even the wisest. For instance, a recent picture of Pope Francis wearing a white puffer jacket surfaced on the internet and shocked netizens globally, only for them to discover it was AI-generated by a 31-year-old construction worker using an image-generating app. You would be surprised to learn how many people believed it was real. What if the creator had chosen to make an offensive and divisive photo? Another example is an image that gained traction online of Elon Musk kissing what was described as a robot wife, which was ultimately proven to be false and AI-generated.
Dependency on AI has reached alarming heights. Using AI technologies is not inherently bad, but at a certain point it can derail one's productivity and creativity. The number of articles generated by AI also raises concerns, as it becomes challenging to differentiate between articles written by humans and those generated by AI if one lacks discernment.
China, known for its significant consumption and embrace of technology, has implemented AI regulatory guidelines. One Chinese company, the chatbot platform Glow, has released rules limiting the amount of time users can spend on the platform to prevent excessive reliance on generated content. If users exceed the time limit, they are blocked from sending messages to the chatbot.
Certain AI platforms have also been accused of exhibiting misogynistic and sexually inappropriate behaviour, which raises questions about the credibility of AI. Glow specializes in creating character-driven "intelligent agents"; in cases where users have developed close or romantic relationships with their chatbot companions, such a restriction could be emotionally distressing.
Testifying before Congress on May 16, OpenAI’s Chief Executive Sam Altman said it was time for regulators to start setting limits on powerful AI systems.
“As this technology advances, we understand that people are anxious about how it could change the way we live. We are too,” Altman told a Senate committee. “If this technology goes wrong, it can go quite wrong,” he said, warning that it could do “significant harm to the world.” He also agreed with lawmakers that government oversight will be critical to mitigating the risks.
AI has also faced accusations of bias and discrimination against women and people of color. Whether an AI is biased, however, depends on the data it is trained on, and such bias can harm a company's image and lead to lawsuits if not corrected.
Like any human, AI is not infallible, and errors can occur. With excessive reliance on generative AI, the proliferation of false information may become a matter of concern. In a recent incident, for example, a lawyer reportedly presented fake cases obtained from ChatGPT and cited them in court. Not only did this put the lawyer's job at risk, it was also a negligent act. Humans should place limits on their use of AI. It is undeniable that integrating technology will change our future for the better and allow it to coexist smoothly with our daily activities. However, as the adage goes, too much of something is poison.
Africa is slowly embracing the tech world. According to recent research by the Centre for the Study of the Economies of Africa (CSEA), a nation's AI readiness is based on the maturity of its technology sector, its data and infrastructure capacity, and its government's ability to regulate and support the development of AI.
Kenya has been ranked fifth for AI readiness and adoption in public service, according to the Oxford Insights Government AI Readiness Index 2022 report, with a score of 28.76% against a global average of 35.17%. This is alarming given how AI harms are already manifesting in society. With AI platforms hosting millions of users, the chances of misinformation, fake news, and fake websites cannot be overlooked. Countries below average in AI readiness could easily fall prey to misinformation if regulatory policies are not implemented to protect them from such incidents.
The government should take the lead in developing and implementing regulatory guidelines to curtail the surge of misinformation and protect its citizens. While Western governments are actively working on regulations, African governments should also collaborate to develop regulatory frameworks tailored to serve African interests and protect people from harm. Doing so would allow Africa to reap the benefits of trustworthy AI.
African policymakers must step up and enforce laws that work toward more transparent and credible AI technology. Africa is part of the evolving world, and it deserves the best exposure to technology.