We Can’t Programme It So It Must Be Human Intelligence
As the saying goes: work smarter, not harder.
This is probably all we need to know to understand the value of intelligent applications (I-Apps), which is now growing exponentially. Years ago, the field of Artificial Intelligence (AI) was taking its first vital steps out of research labs and universities and into our lives. It is quite remarkable how discreetly AI has become embedded in our daily lives; we often cannot even see it there. Maybe we should blame all the sci-fi movies for giving us tunnel vision when it comes to AI.
How often do you interact with AI, or even Superintelligence? The latter probably not at all, but I couldn't resist mentioning it. Let's do a quick Google search. That simple interaction is with an AI we call a search engine. Or rather, to be clearer, it is an interaction with an Intelligent Application.
I-Apps are, at their core, applications embedded with AI, and their uses are already vast. As you are rightly thinking, they have been among us in our search engines, business applications, entertainment, healthcare and more. Now we see just how much AI is embedded in our lives: AI that makes decisions for us, sometimes in the form of recommendations and sometimes as rather harmless puff.
The lowest-hanging I-Apps fruit is automating simple, routine tasks, freeing us for more complex work, the very kind of work I-Apps are already slowly starting to regard as something they too can do. Let's look at the higher-hanging fruit, away from process automation and simple chatbots, and focus on the I-Apps that make decisions, in many cases data-driven ones. These I-Apps face the tricky balancing act of being granted decision-making capability while the risk is kept in check. This introduces one of the most challenging topics, and the impact of getting it right or wrong is immense.
A trip down memory lane should enlighten us. During the Second Industrial Revolution, automation created a division of labour and more scheduled free time. This brought about changes in human behaviour and interaction. There were more holidays and leave days to indulge in, and people could live further from the industrial complexes that later became cities. That was the impact of automating just some manual tasks, yet it greatly transformed society.
Fast forward to now. Decision making is the next step in this change as we welcome the Fourth Industrial Revolution (4IR). I-Apps are being tasked with more complex decisions. Should these decisions be re-examined by people before they are actioned? Some could be. Others require the kind of action we would miss if we blinked. Surely, we cannot freeze time to inspect how an AI came to a decision. Even if we could, we would still have the black box problem.
The question we are building to is this: should AI make moral decisions for humans? This loaded question has been a bone of contention among coders, economists, philosophers, and end-users of I-Apps. One school of thought holds that AI is not a moral agent and cannot make moral decisions; only humans can. On this view, accountability should rest with the person(s) who wrote the AI. However, if the AI has been improving on itself, can we truly say the version that makes such a decision is the same as the one originally written? Or do we say the sins of the child are for the parent to bear?
Another route has been the utilitarian approach to AI ethics, which seeks to maximise the benefits of a decision and minimise the risk. This straightforward and simple route has one huge flaw, as research from the Moral Machine experiment has shown: it is a good idea on paper, but most people would not want it applied to themselves.
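To make the idea concrete, here is a minimal Python sketch of what a utilitarian decision rule could look like: each option gets an estimated benefit and harm, and the rule simply picks the option with the highest net benefit. The option names, the numbers, and the `utilitarian_choice` function are hypothetical, invented for illustration rather than taken from any real I-App.

```python
# A hypothetical sketch of a utilitarian decision rule: score each option by
# expected benefit minus expected harm and pick the maximum. Option names and
# numbers are invented for illustration only.
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    expected_benefit: float  # estimated aggregate benefit, arbitrary units
    expected_harm: float     # estimated aggregate harm, same units


def utilitarian_choice(options: list[Option]) -> Option:
    """Return the option with the largest net expected benefit."""
    return max(options, key=lambda o: o.expected_benefit - o.expected_harm)


if __name__ == "__main__":
    candidates = [
        Option("approve loan automatically", expected_benefit=8.0, expected_harm=3.0),
        Option("decline loan automatically", expected_benefit=2.0, expected_harm=1.0),
    ]
    best = utilitarian_choice(candidates)
    print(f"Utilitarian rule selects: {best.name}")
```

Notice that the rule is indifferent to who carries the harm, which is exactly the discomfort the Moral Machine research surfaces.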
We could always take the human review approach, where the decision is re-examined by humans, on the grounds that humans possess an intuition that, ideally, cannot be taught to machines. The problem is that this approach makes too many assumptions and misrepresents its terms. To start off, intuition is nothing more than an accurate guess. Should we leave moral decisions to guesswork?
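For comparison, here is a minimal sketch of how a human review gate is commonly wired into decision-making software, assuming the model exposes a confidence score: confident decisions are actioned automatically, and the rest are queued for a person. The threshold, the example actions, and the `route_decision` helper are all hypothetical.

```python
# A hypothetical sketch of a human review gate: decisions whose confidence
# clears a threshold are actioned automatically, everything else is queued
# for a person. The threshold, actions and queue are illustrative only.
from typing import NamedTuple


class Decision(NamedTuple):
    action: str
    confidence: float  # model's estimated confidence in [0, 1]


REVIEW_THRESHOLD = 0.9  # assumed cut-off; real systems tune this per use case


def route_decision(decision: Decision, review_queue: list[Decision]) -> str:
    """Auto-action confident decisions; escalate the rest to a human."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return f"auto-actioned: {decision.action}"
    review_queue.append(decision)
    return f"escalated for human review: {decision.action}"


if __name__ == "__main__":
    queue: list[Decision] = []
    print(route_decision(Decision("flag transaction as fraud", 0.97), queue))
    print(route_decision(Decision("deny insurance claim", 0.62), queue))
    print(f"{len(queue)} decision(s) awaiting human review")
```

The design choice worth noting is that the gate only decides who decides; it does nothing to improve the quality of the underlying judgement, human or machine.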
Anything the laws of physics require an object to do can, in principle, be achieved (emulated) by a programme, given enough time. This is the universality of computation. Couple this with the philosophical observation that there is still no clear definition of the term intelligence, and the ethical challenge of AI looks less like a question of whether there is a route to solving it and more like a question of when. Simply put, we have yet to figure out how to code moral ethics into machines.
These and other routes to solving the morality issues of AI must be multi-disciplinary. Philosophy is asking the right questions: we want usefulness from AI, and it must be safe. When we test these and other answers to the moral dilemma of AI, should we set out to prove the theory or to disconfirm it? I believe the only genuine test of a theory is one that attempts to falsify it. And we must all attempt to falsify our theories on AI and morality if we are to find the right choice.