Excessive Trust In GenAI Makes Users Vulnerable
A survey spanning 1,300 respondents in 10 African and Middle Eastern countries has revealed that 63 per cent of users are open to sharing their personal information, while a striking 83 per cent expressed confidence in the accuracy and reliability of these AI tools. “While the results clearly show that generative AI tools are widely used, they also highlight the need for increased user training and awareness regarding the potential risks associated with this powerful new technology,” says Anna Collard, SVP Content Strategy and Evangelist at KnowBe4 AFRICA.
When it was introduced in late 2022, ChatGPT revolutionised the way people work. The highly adept chatbot is known for its almost natural and impeccable English, swiftly winning over critics. Business teams discovered they could create marketing campaigns in minutes instead of hours, and effortlessly generate content. However, some argue the world has embraced generative AI a little too hastily.
“The adoption of GenAI offers tremendous opportunities for African users and organisations, but we also need to consider the associated risks,” says Collard. “Our survey – conducted among smartphone users in 10 African and Middle Eastern countries – indicates that all respondents are using generative AI in their personal and professional lives, with many using it on a daily or weekly basis.” The primary purposes for using generative AI are research, information gathering, email composition, creative content generation, and document drafting. Respondents highlighted several benefits, including time savings, assistance with complex tasks, increased productivity, and enhanced creativity.
“Despite the hype about job losses and various industries being negatively affected by generative AI, 80 per cent of respondents didn’t feel that it threatened their job security, although 57 per cent believed it has the potential to replace human creativity,” says Collard. “What is interesting in the findings is the disconnect between what people think and the reality, particularly when it comes to cybersecurity.”
Too Willing To Share Sensitive Data
The survey, carried out across South Africa, Botswana, Nigeria, Ghana, Kenya, Egypt, Mauritius, Morocco, United Arab Emirates, and Saudi Arabia, revealed that almost two-thirds of users are at ease sharing their personal information with generative AI tools such as ChatGPT. The comfort level in sharing sensitive data varied across countries. For example, in South Africa, only 54 per cent of users were comfortable sharing their personal information with generative AI tools. In comparison, in the UAE it was 67 per cent, and in Nigeria, it was 75 per cent.
According to the survey, 83 per cent of users feel confident about the accuracy and reliability of generative AI, indicating an excessive level of trust. Collard says it is important to encourage critical thinking and awareness of the psychological biases that lead us to blindly trust synthetically generated content. “For example, research shows that people overestimate their abilities to detect deepfakes and perceive AI-synthesised images as more trustworthy than real faces.”
Another concerning finding is the lack of comprehensive policies in organisations to address the challenges associated with generative AI. Almost half of the respondents (46 per cent) reported having no generative AI policy at work, and 8 per cent stated that they were prohibited from using it. “Employees are already using generative AI, so attempting to ban it is futile. Instead, it is crucial to establish policies that protect both organisations and their employees, ensuring responsible and safe usage,” asserts Collard.
Deepfakes – One Of The Most Concerning Uses Of AI
A previous KnowBe4 AFRICA survey found that almost half of consumers do not know what deepfakes are, while a recent survey of South African IT managers by KnowBe4 and ITWeb showed that 60 per cent of organisations do not provide training on deepfakes. “Scammers can trick people into believing that their loved ones are being held hostage by using fake voice and video messages created with this technology,” explains Collard. Because these scams are so convincing, many people are falling for them, including organisations, which stand to lose both money and sensitive data.
With elections coming up in South Africa and the US, the World Economic Forum has ranked disinformation as one of the top risks for this year. Previous research has identified deepfakes as one of the most worrying uses of AI, particularly when deployed for political manipulation. “A zero-trust mindset needs to be cultivated to help people overcome the threats of malicious use of generative AI,” concludes Collard. “Companies should provide more training and implement comprehensive policies to help their employees navigate this exciting and daunting new technology.”