12 Dark Secrets of Artificial Intelligence
Humanity has always dreamed of some omniscient, omnipotent genie that can shoulder its workloads. Now, thanks to the hard work of computer scientists in the labs, we have our answer in artificial intelligence, which, if you buy into the hype, can do just about anything your company needs done. At least some of it, some of the time.
Yes, AI innovations are amazing. Virtual helpers like Siri, Alexa, or Google Assistant would seem magical to a time traveller from as recently as 10 or 15 years ago. Your word is their command, and unlike voice recognition tools from the 1990s, they often come up with the right answer, provided you avoid curveball questions like asking how many angels can dance on the head of a pin.
But for all of their magic, AIs still rely on computer programming, and that means they suffer from all of the limitations that hold back more pedestrian code such as spreadsheets or word processors. They do a better job juggling the statistical vagaries of the world, but ultimately they're still just computers that make decisions by computing a function and checking whether some number is bigger or smaller than a threshold. Underneath all of the clever mystery and sophisticated algorithms is a set of transistors implementing an IF-THEN decision.
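To make that concrete, here is a minimal sketch of the decision buried at the bottom of most neural networks: a weighted sum compared against a threshold. The feature values and weights below are invented purely for illustration.

```python
# A minimal sketch of the decision at the bottom of a neural network:
# a weighted sum compared against a threshold. All numbers are illustrative.

def neuron(inputs, weights, threshold):
    """Return True if the weighted sum of the inputs crosses the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return activation > threshold  # the IF-THEN at the heart of it all

# A toy "model" that flags an image as daytime if it is bright enough.
pixel_brightness = [0.9, 0.7, 0.8]  # hypothetical features
learned_weights = [0.5, 0.3, 0.2]   # hypothetical trained weights
print(neuron(pixel_brightness, learned_weights, threshold=0.5))  # True
```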
Can we live with this? Do we have any choice? With the drumbeat for AI across all industries only getting louder, we must learn to live with the following dark secrets of artificial intelligence.
Much of what you find with AI is obvious
The toughest job for an AI scientist is telling the boss that the AI has discovered what everyone already knew. Perhaps it examined 10 billion photographs and discovered the sky is blue. But if you forgot to put night-time photos in the training set, it won’t realize that it gets dark at night.
But how can an AI avoid the obvious conclusions? The strongest signals in the data will be obvious to anyone working in the trenches, and they'll be just as obvious to the algorithms digging through the numbers. They'll be the first answer the retriever brings back and drops at your feet. At least the algorithms won't expect a treat.
Exploiting nuanced AI insights may not be worth it
Of course, good AIs also lock on to small differences when the data is precise. But exploiting these small insights can require deep strategic shifts in the company's workflow. Some of the distinctions will be too subtle to be worth chasing, yet the computers will obsess over them anyway. The problem is that big signals are obvious and small signals may yield small or even nonexistent gains.
Mysterious computers are more threatening
While early researchers hoped that the mathematical approach of a computer algorithm would lend an air of respectability to the final decision, many people in the world aren’t willing to surrender to the god of logic. If anything, the complexity and mystery of AI make it easier for anyone unhappy with the answer to attack the process. Was the algorithm biased? The more mystery and complexity under the hood, the more reasons for the world to be suspicious and angry.
AI is mainly curve fitting
Scientists have been plotting noisy data and drawing lines through the points for hundreds of years. Many of the algorithms at the core of machine learning do just that: they take some data and draw a line through it. Much of the advancement has come from finding ways to break the problem into thousands, millions, or even billions of little problems and then drawing lines through all of them. It's not magic; it's just an assembly line for how we've been doing science for centuries. People who don't like AI and find it easy to poke holes in its decisions focus on the fact that there's often no deep theory or philosophical scaffolding to lend credibility to the answer. It's just a guesstimate for the slope of some line.
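Fitting such a line is one call in most numeric libraries. The sketch below uses invented data, but it is the whole "learning" step at the heart of many models.

```python
import numpy as np

# Synthetic "noisy measurements": y = 2x + 1 plus noise (illustrative values).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=1.0, size=x.size)

# One line of least squares recovers the slope and intercept --
# the same curve fitting scientists have done by hand for centuries.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"estimated slope={slope:.3f}, intercept={intercept:.3f}")
```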
Gathering data is the real job
Everyone who starts studying data science soon realizes that there's not much time for science, because finding the data is the real job. AI is a close cousin to data science, and it has the same challenges. It's 0.01% inspiration and 99.99% perspiring over file formats, missing data fields, and character codes.
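A typical day looks less like modelling and more like the sketch below. The file name and column names are hypothetical; the ritual of coaxing a messy file into shape is not.

```python
import pandas as pd

# The unglamorous 99.99%: wrangling a hypothetical "survey.csv" into shape.
df = pd.read_csv("survey.csv", encoding="latin-1")   # guess at the character code
df.columns = df.columns.str.strip().str.lower()      # stray whitespace in headers
df["age"] = pd.to_numeric(df["age"], errors="coerce")  # "N/A" and "??" become NaN
df = df.dropna(subset=["age"])                       # drop rows missing the key field
```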
You need massive data to reach deeper conclusions
Some answers are easy to find, but deeper, more complex answers often require more and more data. Sometimes the amount of data required rises exponentially with the complexity of the question. AI can leave you with an insatiable appetite for more and more bits.
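A back-of-the-envelope calculation, with assumed numbers, shows why. If every feature can take just 10 distinct values, the number of combinations a model might need to see grows exponentially with the number of features.

```python
# Illustrative arithmetic: combinations (and, roughly, the data needed
# to cover them) grow exponentially with the number of features.
values_per_feature = 10
for features in (2, 4, 8, 16):
    print(features, "features ->", values_per_feature ** features, "combinations")
# 2 -> 100, 4 -> 10000, 8 -> 100000000, 16 -> 10000000000000000
```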
You’re stuck with the biases of your data
Just like the inhabitants of Plato’s Cave, we’re all limited by what we can see and perceive. AIs are no different. They’re explicitly limited by their training set. If there are biases in the data — and there will be some — the AI will inherit them. If there are holes in the data, there will be holes in the AI’s understanding of the world.
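A toy sketch with invented numbers makes the point. If the data collection oversamples one group, any model that learns the base rate from that sample inherits the skew.

```python
import numpy as np

# Toy sketch (invented numbers): the world is 50/50, but the collection
# process oversamples one group. The learned "prior" inherits the bias.
rng = np.random.default_rng(0)
reality = rng.integers(0, 2, 100_000)            # true 50/50 population
sample = np.concatenate([
    reality[reality == 1][:9_000],               # collection favours group 1...
    reality[reality == 0][:1_000],               # ...and barely sees group 0
])

learned_base_rate = sample.mean()                # what a model would learn
print(f"learned P(group 1) = {learned_base_rate:.2f}")  # ~0.90, not 0.50
```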
AI is a black hole for electricity
Most good games have a final level or an ultimate goal. AIs, though, can keep getting more and more complex. As long as you’re willing to pay the electricity bill, they’ll keep churning out more complex models with more nodes, more levels, and more internal state. Maybe this extra complexity will be enough to make the model truly useful. Maybe some emergent sentient behaviour will come out of the next run. But maybe we’ll need an even larger collection of GPUs running through the night to really capture the effect.
Explainable AI is just another turtle
AI researchers have been devoting more time of late to explaining just what an AI is doing. We can dig into the data and discover that the trained model relies heavily on certain parameters that come from a particular corner of the data set. Often, though, the explanations are like those offered by magicians who explain one trick by performing another. Answering the question why is surprisingly hard. You can look at the simplest linear models and stare at the parameters, but often you'll be left scratching your head. If the model says to multiply the number of miles driven each year by a factor of 0.043255, you might wonder why not 0.043256 or 0.7, or maybe something outrageously different like 411 or 10 billion. Once you're working on a continuum, all of the numbers along the axis might be right.
It's like the old cosmology in which the Earth sat on the back of a giant turtle. And what did that turtle stand on? The back of another turtle. And the next one? It's turtles all the way down.
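To see how little the raw parameters "explain," consider this sketch. The data is invented so that the fitted coefficient lands near the 0.043255 mentioned above; the fit is an answer, not an explanation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented data: premiums generated from mileage with a little noise.
rng = np.random.default_rng(0)
miles = rng.uniform(1_000, 20_000, size=200).reshape(-1, 1)
premium = 500 + 0.043255 * miles.ravel() + rng.normal(scale=25, size=200)

model = LinearRegression().fit(miles, premium)
print(model.coef_)       # something like [0.0432...] -- but *why* that number?
print(model.intercept_)  # staring at the parameters explains nothing deeper
```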
Trying to be fair is a challenge
Suppose you want an AI to pick your basketball squad without fixating on height. You could leave height out of the training set, but the odds are pretty good that your AI program will find some other proxy to flag the taller people and choose them anyway. Maybe it will be shoe size. Or perhaps reach. People have dreamed that asking a neutral AI to make an unbiased decision would make the world a fairer place, but sometimes the issues are deeply embedded in reality and the algorithms can't do any better.
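A quick sketch with invented numbers shows how a proxy leaks the hidden attribute back in. Drop height from the features, and a correlated stand-in like shoe size carries almost the same signal.

```python
import numpy as np

# Toy sketch (invented numbers): shoe size tracks height closely enough
# that a model "blind" to height can rediscover it through the proxy.
rng = np.random.default_rng(0)
height = rng.normal(185, 10, 500)                   # cm
shoe_size = 0.2 * height + rng.normal(0, 1.0, 500)  # strongly tied to height

corr = np.corrcoef(height, shoe_size)[0, 1]
print(f"correlation(height, shoe size) = {corr:.2f}")  # ~0.89
```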
Sometimes the fixes are even worse
Is forcing an AI to be fair any real solution? Some try to insist that AIs generate results with certain preordained percentages. They put their thumb on the scale and rewrite the algorithms to change the output. But then people start to wonder why we bother with any training or data analysis if we've already decided on the answer we want.
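Here is what that thumb on the scale can look like in practice. This is a hypothetical sketch, not any real system's code: whatever the model scores, the output is forced to hit a preset quota.

```python
# Hypothetical sketch of "putting a thumb on the scale": the quota,
# not the training, decides the final answer.
def select_with_quota(candidates, scores, groups, quota_per_group, k):
    """Pick k candidates, but guarantee each group its preset share."""
    picked = []
    for group, share in quota_per_group.items():
        members = [(s, c) for s, c, g in zip(scores, candidates, groups) if g == group]
        ranked = sorted(members, reverse=True)         # best scores first
        picked += [c for _, c in ranked[: int(k * share)]]
    return picked

team = select_with_quota(
    candidates=["a", "b", "c", "d"], scores=[0.9, 0.8, 0.3, 0.2],
    groups=["x", "x", "y", "y"], quota_per_group={"x": 0.5, "y": 0.5}, k=2,
)
print(team)  # ['a', 'c'] -- 'b' outscored 'c', but the quota overrode it
```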
Humans are the real problem
We’re generally happy with AIs when the stakes are low. If you’ve got 10 million pictures to sort, you’re going to be happy if some AI will generate reasonably accurate results most of the time. Sure, there may be issues and mistakes. Some of the glitches might even reflect deep problems with the AI’s biases, issues that might be worthy of a 200-page hairsplitting thesis.
But the AIs aren't the problem. They will do what they're told. If they get fussy and start generating error messages, we can hide those messages. If the training set doesn't generate perfect results, we can set aside the result that whines for more data. If the accuracy isn't as high as we'd like, we can just file that result away. The AIs will go back to work and do the best they can.
Humans, though, are a completely different animal. The AIs are their tools, and the humans are the ones who want to use them to find an advantage and profit from it. Some of these plans will be relatively innocent, but some will be driven by secret malice aforethought. Many times, when we run into a bad AI, it's a puppet on a string for some human who's profiting from the bad behaviour.