Is the world ready for deepfake technology?
Technology has impacted lives in so many ways. New technology is invented every day to make lives easier and better, but sometimes it cuts the other way. Such is the double-sidedness of this enabler, technology.
As inventions continue to fill this space with excitement and everyday newness, the consequences land just as hard. The 21st century's insatiable appetite for innovation keeps propelling it in every direction, benefiting and victimizing the supposed end users in equal measure.
Recently the world of apps welcomed the launch of an 'amazing app' whose very reason for creation seemed orchestrated for malice.
Dubbed DeepNude, the app uses artificial intelligence to create naked images from pictures of clothed women – and only women; it does not work on men.
The app soon caused widespread outrage, forcing the anonymous team behind it to pull it from download four days after its official launch. As a representative of the team put it, "The world is not ready for DeepNude."
I agree that the world may not have been prepared for it – or was it simply unable to understand its intent? To this moment I haven't thought of a single positive way DeepNude could be used; instead I keep pondering its use in blackmail, extortion, damaging careers and ruining lives, among many other possible harms.
DeepNude, even after its withdrawal, has been complemented by deepfakes – a category of computer-generated imagery that spells even greater potential for harm.
Deepfakes represent a different, even more malicious use of face-processing technology than traditional facial recognition, which already plays a growing role in everyday life. Facial recognition is what finds all the snapshots of a specific friend in, say, Google Photos, and what can scan a face at an airport or a concert without your consent.
In recent months, deepfakes' potency has caused almost philosophical angst, owing to the video technique's ability to put words into the mouths of celebrities, politicians and even members of the public. Ben Sasse, a US senator, has described the potential of deepfakes to undermine democracy as something that "keeps the intelligence community up at night", and it is clearly a powerful weapon in the ongoing misinformation war.
Did scientists not foresee any of this while they worked so hard on artificial intelligence?
In the summer of 2017, a paper from the University of Washington entitled Synthesising Obama: Learning Lip Sync from Audio described in great detail the procedures involved in creating realistic fake video and audio of the former US president. As a technological and intellectual exercise it was a formidable achievement, but the paper gave no details of what the applications of the experiment might be. Last December, the AI Now Institute, which studies the social implications of AI, warned of the unforeseen consequences of AI scientists performing seemingly benign investigations.
As fears of increasingly accurate deepfakes grow, there’s been much speculation over how they could be identified, labelled or blocked. Perhaps a system based on blockchain could be introduced, in which any piece of data or media could be indisputably verified as existing on a certain date, or a camera could be used that can place indelible watermarks in the code of each digital image. But these suggestions are up against two significant problems.
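To make the verification idea concrete, here is a minimal sketch in Python of how a media file's fingerprint might be recorded at a point in time. It is an illustration only: the in-memory "ledger" stands in for whatever blockchain or trusted registry such a system would actually anchor its records to, and the function names are mine, not any existing product's.

```python
# Sketch of content verification: hash a media file and record the
# digest with a timestamp. In a real system the (digest, timestamp)
# pair would be anchored in a blockchain or trusted public registry;
# the in-memory "ledger" here is an illustrative assumption.
import hashlib
import time

def fingerprint(path: str) -> str:
    """Return the SHA-256 digest of a file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

ledger = {}  # stand-in for an immutable, public record

def register(path: str) -> None:
    """Record that this exact file existed at this moment."""
    ledger[fingerprint(path)] = time.time()

def existed_by(path: str, when: float) -> bool:
    """Was this exact file registered on or before `when`?
    Any later doctoring changes the digest and fails the check."""
    recorded = ledger.get(fingerprint(path))
    return recorded is not None and recorded <= when
```

The weakness, of course, is the same one the article goes on to describe: such a scheme proves a file existed at a certain date, not that its contents were ever genuine.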
The first is that deepfake technology depends on generative adversarial networks (GANs). The process is conceptually simple: one network generates fake media while a second rates each sample as real or fake. By playing the two systems against each other at great speed, the machines get better at detecting fakes – but also better at fooling the detectors.
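For readers curious about the mechanics, here is a minimal, toy sketch of that adversarial loop in PyTorch. The network sizes, the random "real" data and the training schedule are illustrative assumptions, not details of any actual deepfake system.

```python
# Toy GAN sketch: a generator learns to produce fake samples while a
# discriminator learns to tell them from real ones. All dimensions
# and data here are illustrative toy assumptions.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to a fake sample.
G = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: rates a sample as real (1) or fake (0).
D = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for real media
    fake = G(torch.randn(32, latent_dim))

    # Train the discriminator to separate real from fake.
    opt_D.zero_grad()
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_D.step()

    # Train the generator to fool the discriminator.
    opt_G.zero_grad()
    g_loss = bce(D(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_G.step()
```

The loop makes the arms-race point directly: every gradient step that sharpens the discriminator also supplies the signal the generator uses to beat it next time.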
Secondly, given the nature of this process, many people doubt that any detection system could stay ahead for long, because the media-generating machines can always learn to evade it. And as deepfakes get more convincing, the urge to share them online could become more damaging.
In June 2019, a group of researchers at the Skolkovo Institute of Science and Technology in Russia developed a system that could create convincing fakes from only a few images. By contrast, the researchers behind the Synthesising Obama paper had many hours of material to work with.
Two months ago, a doctored video of the speaker of the US House of Representatives, Nancy Pelosi, apparently speaking with a slurred voice, went viral across the internet. This kind of simple doctoring can inflict great damage not only on individual reputations but also geopolitically, especially if the fakes become convincing enough to compromise viewers' judgment and encourage them to share such media online.
If we fail to develop resistance, or at least a cushion, against deepfakes, the attention economy of social media – where even the least convincing fakes command high currency – will do the rest. Even now, people engage with fake tweets, and that engagement lets deepfakes operate on the margins, with occasional shocks for the public.
Thinking deeply about the personal threat deepfakes pose, particularly to women, a combination of grudges and readily available technology could produce unpleasant fake 'evidence' that women would spend their lives trying to deny and rebut.
Deepfakes may also have a quieter, more profound impact on what we are prepared to believe, especially in an era where valid criticism can already rebound harmlessly off public figures. The technology gives malicious people a plausible way to deny any accusation made against them.