When Truth Fades: Communicating In The Age Of Synthetic Influence
At the Africa Tech Festival in Cape Town, one of the most thought-provoking sessions didn’t dwell on AI’s capabilities but on its consequences, especially for communicators tasked with shaping brand identity and managing public trust. The conversation didn’t fixate on prompt engineering or LLM performance benchmarks. Instead, it confronted the uncomfortable truth: artificial intelligence now determines what stakeholders see, believe, and share. And it’s doing so faster than institutions can respond, fact-check, or repair the damage.
Trust, once a product of time and consistency, is now at the mercy of velocity and volume. AI tools can generate compelling narratives before humans have had a chance to verify their accuracy or intention. This temporal gap between message generation and message validation is where misinformation thrives. And it’s precisely where brand reputations begin to unravel.
The numbers speak volumes. According to the 2024 Edelman Trust Barometer, global trust in information has fallen below fifty percent. That erosion isn’t abstract. It’s visible in every manipulated headline, every synthetic video clip, and every viral half-truth circulating unchecked. It’s compounded by rising expectations. Audiences now demand authenticity, traceability, and accountability, not just from political actors or newsrooms, but from brands and institutions they engage with daily. And as the tools get better, the risks get bigger. AI can supercharge communication, but it can just as easily supercharge distrust.
For communicators, the task ahead is clear: it’s no longer about mastering media relations; it’s about building reputation resilience. In an environment where deepfakes and fabricated personas can go viral in seconds, brand narratives must be protected with the same rigor as cybersecurity systems. Content strategy needs a reset. The metric of success can no longer be velocity; it must be integrity.
This shift brings with it a set of hard questions. Every piece of content becomes a brand promise. Every interaction, a reputation asset or liability. Ethical communication can no longer live on the margins of strategy decks. It must be embedded as a core component of corporate policy, not a PR ideal but a boardroom imperative.
And then there’s the minefield. Do we prioritise speed or authenticity? How transparent can we afford to be, especially when the AI models we use are often opaque themselves? What happens when the biggest threat to credibility comes from within, when our own teams misuse AI tools without oversight?
The answers lie not in code but in values. AI may automate content, but it cannot replicate judgment. It cannot build trust through years of consistent behaviour. It cannot look someone in the eye and say, “I understand, and I’m going to help.” It cannot interpret nuance or shoulder the weight of responsibility when communication goes wrong.
That’s the communicator’s burden, and their value. In a landscape where the lines between authentic and synthetic are blurring fast, human-led communication is no longer a given. It’s a competitive advantage, and increasingly, the only one that matters.