Last week, I happened to speak to a paralegal. When I told the elderly lady about my job at Luminovo, she said that she felt sorry for her colleagues because they are soon going to lose their jobs to intelligent computers. I was shocked. Is that how the broader public thinks about AI: as a digital monster that will eventually eat up all jobs? At Luminovo, we are convinced that AI will primarily complement human work by automating repetitive tasks rather than substitute it. Discover why you should see AI as your future best friend and what it looks like beyond the hype.
You might have experienced the same mixed feelings about AI, wavering between fear, fascination, and hope. Often, it is portrayed as the beginning of a frightening apocalyptic age in which robots rule over the human race and science-fiction nightmares come true. In the same vein, for many people, AI is that scary digital monster that will devour jobs en masse and lead to a jobless future. In contrast, AI is also seen as the vital component of stunning technological advancements and innovations such as self-driving cars, autonomous drones, or smart personal agents. It is strongly tied to the hope that AI will be the all-round solution for the most challenging problems of our time. Magical expectations of AI have led to a “technotopia”, a hype in which nothing seems impossible. These extreme feelings and conceptions are fueled by bold, sticky headlines that spread virally on social media. However, they often do not reflect reality. We are enchanted by new AI feats, but we ignore that these showcases are mostly demonstrated in specific test settings and are not yet ready to work in practice.
Practitioners’ conception of AI stands in stark contrast to the one drawn by the media. Looking at real-world AI problems means taking off those media-tinted glasses and recognizing that AI has great potential to transform whole industries, but also that the path there will be bumpy. For one, many AI problems can be solved by good old proven statistical algorithms. This is especially true for structured data (e.g., SQL tables). Prof. Robin Hanson therefore coined the viral quote: “most firms that think they want advanced AI/ML really just need linear regression on cleaned-up data”.
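To see how little machinery Hanson’s quip actually requires, here is a minimal sketch of ordinary least-squares regression on a small, made-up structured table (the feature names and numbers are purely illustrative), using nothing beyond NumPy:

```python
import numpy as np

# Hypothetical "cleaned-up" structured data: machine runtime (hours) and
# number of solder joints as features, defect count as the target.
X = np.array([[10, 200], [20, 400], [30, 550], [40, 800]], dtype=float)
y = np.array([3.0, 6.0, 8.0, 12.0])

# Add an intercept column and solve the ordinary least-squares problem.
X_design = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(X_design, y, rcond=None)

# Predict defects for a new, unseen row of the table.
new_row = np.array([1.0, 25.0, 500.0])  # intercept, runtime, solder joints
prediction = float(new_row @ coef)
print(prediction)
```

No neural network, no GPU: a few lines of linear algebra on a clean table already yield a usable predictive model for many business questions.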
Second, there is no such thing as a free lunch. What reminds most people of finance theory is in fact a famous theorem in machine learning: the “no free lunch” theorem. At its core, it states that no single algorithm fits all problems. Each AI problem requires a particular algorithm or architecture that must be trained individually. When we talk, for instance, about IBM Watson beating the best “Jeopardy!” player in the world and giving cancer treatment recommendations, we are talking about two very different algorithms that were trained individually.
So far, nobody knows how to give a computer system the mental ability to reason and learn like a human being. Explain to a child what a drinking glass looks like, and it will be able to infer that a stein serves the same function. Computer vision algorithms are still not able to make such easy inferences, a capability also called “zero-shot learning”. The fact that the colors of a stein and a glass are so different can pose a serious problem for an object detection algorithm, even though the shapes might be very similar. In a recent blog post, Luminovo founder Sebastian Schaal explained that the arrival of an artificial general intelligence lies, from a feasibility point of view, “somewhere between colonizing Mars and teleporting”.
Seeing that AI is not going to steal your lunch money, I want to highlight three reasons why AI is going to complement human work instead of replacing it.
1. Humans and machines are good at different things
Substitution only takes place where there is direct competition. With AI, however, humans and machines are good at fundamentally different things. When we watch a crime movie, it is easy for us to guess who might have committed the crime, partly because we excel at drawing complex conclusions, but also because we use our ingenuity and imagination. For machines, complex logical inferences are extremely hard. Instead, they excel at processing large amounts of data very fast. Take object detection as an example: recognizing a dog in a picture is an intellectually easy task that any four-year-old can do. Processing millions of images at the same time, however, is only possible with computers.
2. Most jobs require coordination
Typical white-collar jobs comprise a wide range of different tasks. An employee must coordinate these tasks and decide what to do when. We will see AI automating specific tasks but, in most cases, not whole jobs. With AI taking over tedious, repetitive tasks, employees can focus on more interesting work where their creativity and cognitive capabilities are required. By complementing jobs, AI will break a fundamental trade-off: pushing productivity and increasing job satisfaction at the same time. Moreover, many jobs are all about human relationships and interaction. The primary task of a lawyer, for example, is to negotiate and settle arguments for their clients, not to sift through thousands of documents to find the one that has merit for the case.
3. Missing accuracy
In many instances, AI algorithms are not accurate enough to fully automate a specific task. Take image-based diagnosis as an example: a computer vision algorithm can only give suggestions. A doctor always has to review the proposal, since a wrong diagnosis could be fatal for the patient. However, AI can highlight the information that is relevant for the diagnosis, enabling the doctor to be more productive.
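This division of labor can be sketched in a few lines of code. The following is an illustrative toy example (the threshold value and labels are hypothetical, not a real diagnostic system): the model’s suggestion is accepted automatically only when its confidence is high enough, and everything else is routed to a human expert.

```python
# Hypothetical confidence threshold; in practice it would be tuned per task
# and set conservatively for safety-critical decisions like diagnosis.
CONFIDENCE_THRESHOLD = 0.95

def triage(prediction: str, confidence: float) -> str:
    """Route a model suggestion: automate if confident, else escalate."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: {prediction}"
    # Low confidence: the human expert makes the final call.
    return f"review: {prediction}"

print(triage("benign", 0.98))     # confident enough to automate
print(triage("malignant", 0.70))  # uncertain, escalate to the doctor
```

The point is not the code itself but the pattern: the machine handles the bulk of clear-cut cases, while the human keeps authority over everything uncertain or consequential.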
We should stop worrying about losing our jobs or an impending “robocalypse”. Instead, we should think about how we can foster the human-machine relationship and figure out where else AI could bring its strengths into play. A new way to strengthen the collaboration between machines and humans is human-in-the-loop (HITL) platforms, which we will introduce in one of our next blog posts. So stay tuned!
Sebastian graduated top of his class with an MSc in electrical engineering from TU Munich and the CDTM. During his second MSc at Stanford he focused on management science and machine learning. He worked as a consultant at McKinsey before returning to engineering at Intel and deep tech startups.