By Christian Engel
July 19 — The concept of artificial intelligence has provided fodder for science fiction writers for decades. Today, narratives that we will soon see a malevolent super-intelligence like Skynet from “Terminator” have become commonplace. But a dark future with a fight against monsters like the Terminator seems a bit far-fetched. Yet if you’ve been reading up on artificial intelligence recently, you’ve come across plenty of myths almost as scary as the Terminator: All our jobs are going to disappear! Robots are going to take over our business activities! Every company has to purchase AI very soon in order to survive in the market!
AI myths are widespread. Let's examine a few of them.
All of us, in one way or another, are customers of a bank. Let's say I call the bank or send a message to its customer service department. An AI system may answer my question "independently," without my ever realizing that I'm not talking to a human. Is that already happening in the real world? Yes.
There is no question among those well versed in technology that AI will change the way we work and live. But the popular myths surrounding AI contain some ominous predictions. The chief information officer or head of analytics must educate the CEO to ensure that the bank's decision makers are not operating under false assumptions about the technology.
Myth 1: Organizations need a chief AI officer.
Should businesses believe the claim that purchasing artificial intelligence software will solve all their problems? This is clearly a myth born of industry AI hype. What we see instead is that chief data officers or innovation managers report to the chief risk officer to give advice on AI in risk processes. This seems a more practical approach than appointing a chief AI officer.
There is no single thing called AI, nor will it solve all business problems. AI is always a bundle of various technologies, modeling approaches and interfaces. Does this mean every business should have an AI strategy or a chief AI officer to identify the best approach?
Let's look back to the 1990s, when graphical user interfaces were gaining mainstream popularity. GUIs are still advancing and improving today, but nobody had a GUI strategy or a chief GUI officer. The same is true of AI today.
Myth 2: AI is reality.
What about the belief that artificial intelligence is now a reality? This broad statement falls into the gray area between myth and fact, so we have to make some distinctions. Individual capabilities, such as the speech recognition behind Siri or learning models for fraud detection, are already a reality. But there is no truly independent AI as such.
Consider the different types of intelligence a child develops. These include logical/mathematical intelligence, motor skills, and social and emotional intelligence. Similarly, banks, other institutions and CIOs have to define what AI solutions they need for different types of intelligence.
In fact, each system, such as an autonomous vehicle, only pursues objectives that humans have programmed it to pursue. Similarly, to get meaningful forecasting results, an asset management approach requires a human to design the machine learning models and supply the data upfront.
Myth 3: AI has human characteristics.
That brings us to the next big AI myth. “Artificial intelligence has human characteristics.”
Highly advanced data analytics, the foundation for AI, attempts to predict human behavior as accurately as possible and to communicate with users in a way that creates the illusion of human interaction. But it is still merely an illusion, sometimes more convincing than others. Just as with a self-driving car, every AI system has a data scientist behind it who must decide which modeling approach to use: deep learning, a random forest or logistic regression.
These approaches must then be implemented within the business process in just the right way to be viable, and one thing is still indispensable for that: people. Platform vendors certainly provide a wide range of development approaches and options for integrating them, but the developers are still human beings. And a so-called superintelligence created by AI simply does not yet exist. (Yet researchers such as Stephen Hawking have warned that a superintelligence may be created in as few as 50 years.)
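The point that a human always chooses and configures the model can be sketched in a few lines of code. This is purely an illustrative example, assuming the scikit-learn library is available; the synthetic data, the two candidate models and their parameters are all hypothetical choices a data scientist makes, not anything the "AI" decides for itself.

```python
# Illustrative sketch: the human picks the data, the candidate models
# and the yardstick for judging them. The software only fits and scores.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A human decides what the training data looks like (synthetic here)...
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# ...and which modeling approaches to compare.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)              # the model only fits what it is given
    scores[name] = model.score(X_test, y_test)
    print(f"{name}: accuracy = {scores[name]:.2f}")  # a human interprets this number
```

Nothing in this loop decides on its own to try a different model or gather different data; every such step is a human decision.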
Myth 4: AI is self-learning.
Another myth that fits here is that AI is self-learning. In this case as well, the system stays within the limits its programmers have set for it. It gives its programmers feedback, but the programmers decide what to do with that information. The self-learning does not happen in the AI itself, but within specialized mathematical procedures that work iteratively, applying a learning algorithm. How does it learn? Through a predefined iterative procedure, or through data that is fed back in again and again. But people are the ones who decide which data are used.
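As a minimal sketch of what that kind of "learning" actually is, consider gradient descent fitting a straight line in plain Python. The data points and every parameter below are hypothetical choices made by a human; the loop itself merely repeats a fixed procedure.

```python
# Minimal sketch of "self-learning": gradient descent fitting y = w * x.
# Every ingredient below - the data, the error measure, the step size,
# the number of iterations - was chosen by a human, not by the algorithm.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # human-chosen training pairs (x, y)
w = 0.0                                       # initial guess
learning_rate = 0.05                          # human-chosen step size
iterations = 200                              # human-chosen stopping rule

for _ in range(iterations):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad                 # the "learning" step

print(f"learned slope w = {w:.2f}")           # prints: learned slope w = 2.04
```

The loop "learns" only in the narrow sense of nudging one number toward the data it was handed; change the data or the procedure, and a person has to do it.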
Myth 5: AI is a lethal danger to the human race.
Then there is the terrifying myth that brings us back to the Terminator: that artificial intelligence is a lethal danger to humanity. This is the central premise of the popular television show "Westworld," in which intelligent robots become sentient and terrorize their human masters.
The capabilities of artificial intelligence may fuel plenty of these fantasies, but we are a long way from such a scenario. Myth, myth and more myth. Still, when we hear that futurist and Google engineer Ray Kurzweil predicted at the beginning of the millennium that self-driving cars would arrive by 2015, which actually happened, and that he now predicts we will be able to extend our human brains into the cloud by 2050, it certainly grabs your attention.
I’m going to ask Siri what she thinks.
Based in Germany, Christian Engel is a business analytics advisor for SAS. His team recently conducted 100 interviews with business leaders to understand the current state of AI readiness in corporations.