Source: Gartner Inc, 2019
The hyping and over-hyping of new technologies is certainly not a new phenomenon, but the pace of rise and fall has accelerated to the point where it is hard to distinguish value from trend. Many original technical terms have acquired multiple, more marketable synonyms, and it is up to companies to avoid falling for empty trends cloaked in the latest buzzword while dealing with the pressure of staying ahead of the competition in terms of innovation.
Source: Arvind Narayanan on Twitter
Of particular significance are “intelligent” (or AI-based) technologies, with many case studies supporting the profitability of integrating artificial intelligence into business systems. However, uninformed investment in “AI” in general, as seen when companies scramble to keep up or try to force through automation with the main aim of cutting as many jobs as possible, can be outright detrimental. This may seem self-evident; however, there are enough incidents of such failed digitalization approaches to legitimize discussion of the topic. Across industries, there is currently particular interest in automation, a topic that combines three buzzwords I would like to unpack: Robotic Process Automation (RPA), Business Process Management (BPM), and Artificial Intelligence (AI). Implementing artificial intelligence in business processes inarguably holds great profit potential, but it does not come easily to companies. An FIS study cited in McKinsey’s work on Intelligent Process Automation found that 99.6% of insurance companies were struggling with digital innovation, while 80% recognized the need to keep up on this front. The core question here seems to be: what does a company need to know and consider in order to use intelligent technologies intelligently?
Before we address this question, let’s briefly look at the relevant terms. The umbrella term “Intelligent Technologies” often refers to the combination of three keywords: RPA, BPM, and AI. Robotic Process Automation is based on the automated processing of structured business processes by digital software robots. Essentially, their task is to imitate human interaction with the user interface of a software system. This could be: reading numerical data from a PDF file and transferring it to an Excel spreadsheet, or automatically sending personalized purchase-confirmation emails with an attached invoice. Business Process Management deals with the design and improvement of business processes towards increased efficiency. More concretely, this includes the collection of data on internal business processes, the resulting quantitative analysis of their efficiency, and the implementation of improvements based on that analysis.
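To make the RPA idea concrete, here is a minimal sketch of what such a “software robot” does: scrape structured fields out of text (which a PDF parser would supply in a real pipeline) and transfer them into a spreadsheet-style file. The invoice text, field names, and pattern are all invented for illustration.

```python
import csv
import io
import re

# Hypothetical text extracted from a PDF invoice; in a real RPA
# pipeline, a PDF-parsing library would supply this string.
invoice_text = """
Invoice 1041  Customer: Acme GmbH   Total: 1299.00 EUR
Invoice 1042  Customer: Beta AG     Total: 349.50 EUR
"""

# The "software robot": pull structured fields out of the text...
pattern = re.compile(r"Invoice (\d+)\s+Customer: (.+?)\s+Total: ([\d.]+) EUR")
rows = [m.groups() for m in pattern.finditer(invoice_text)]

# ...and transfer them into a spreadsheet-style CSV.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["invoice_id", "customer", "total_eur"])
writer.writerows(rows)

print(buffer.getvalue())
```

Nothing here is learned or adaptive; the robot simply executes a fixed script, which is exactly why plain RPA breaks down as soon as the input deviates from the expected format.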
Software that documents internal business processes and virtual robots that handle data entry or automated emails — these applications are undoubtedly practical, but hardly “rocket science”. Do they already justify the hype around RPA and BPM?
Neither concept is novel: the 1990s already saw hype around “Straight Through Processing”, which is now used mainly for the automatic processing of financial transactions and rests on concepts similar to RPA and BPM. What is new in this context, however, is the commercial use of artificial intelligence. The term AI generally describes systems that can perform tasks that previously required human intelligence. Machine Learning (ML) is a concrete subdiscipline within artificial intelligence that underlies most “AI”-powered systems. Conceptually, ML techniques allow a computer to internalize concepts contained in training datasets in order to make predictions for new input data. A classic example is predicting housing prices from given features such as square meters, location, etc. Conventional ML techniques require structured data, i.e., data in an ordered, tabular format such as an Excel sheet or SQL table. Within ML, however, Deep Learning (DL) describes a class of algorithmic architectures which, by design, can automatically extract classification-relevant features from unstructured data. DL thus enables the analysis of image, speech, and video data, which makes up most of the content captured by the sensors around us.
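The housing-price example can be sketched in a few lines: fit a one-feature linear model (price as a function of square meters) with the classic least-squares formulas, then predict for an unseen flat. The numbers are made up for illustration, and real applications would use more features and a proper ML library.

```python
# Toy structured dataset: size in square metres -> price in thousands.
sqm   = [50, 70, 90, 110]
price = [150, 210, 270, 330]

n = len(sqm)
mean_x = sum(sqm) / n
mean_y = sum(price) / n

# Least squares for one feature: slope = cov(x, y) / var(x).
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(sqm, price)) / \
        sum((x - mean_x) ** 2 for x in sqm)
intercept = mean_y - slope * mean_x

# "Prediction for new input data": an 80 m² flat not in the training set.
predicted = intercept + slope * 80
print(predicted)  # → 240.0
```

This is the whole conceptual loop of conventional ML in miniature: parameters are estimated from training data, then reused on new inputs. Deep learning differs in that the features themselves (here, hand-picked as square meters) are learned from raw, unstructured data.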
Leveraging machine learning to improve automation
The combination of RPA and BPM strategies with artificial intelligence techniques enables Intelligent Process Automation. The main selling point of this combined approach is sustainable profit through digitization. Restructuring processes for efficiency alone, or handing easily scriptable tasks over to software robots, may be profitable in the short term, but integrating intelligent feedback processes and data analysis into a business tends to yield higher gains. A prime example is Google, which in 2016 reduced the energy used to cool its data centres by up to 40%. This was made possible by deep learning algorithms, which produced accurate multidimensional predictions of energy usage by taking into account hundreds of factors and their interactions. So the first answer to our introductory question is: integrating AI is the key to fully exploiting the profitability of more conventional automation methods. Using the human body as an analogy, software robots in RPA act as the arms and legs (physically executing tasks according to a script), while the AI component represents the brain (designing the script through a learning process). This analogy holds within the narrower context of automatable tasks, which brings us to the second point…
Human and artificial intelligence are complementary
Excessive automation and the resulting losses, as in the case of Tesla, show how important it is to remember that employees remain the actual brain of the operation. Automation is valuable because it takes on repetitive tasks, not because it replaces employees. Why will it stay that way? Human and artificial intelligence are fundamentally different. Intelligence is essentially successful problem-solving behavior, but different intelligent systems have different strengths and weaknesses depending on the type of problem to be solved. Artificial intelligence is superior in raw speed and computational efficiency, partly because purely electronic systems are not subject to biological constraints. However, it is also limited by functional specificity: it is adapted to one problem type, such as object recognition, and thus differs categorically from our broadly applicable human intelligence. The difference between “narrow” and “general” intelligence is a qualitative one, which is why there is no predictable linear transition from the former to the latter. Our brain is highly interconnected (to the extent that no higher cognitive function can be reduced to a single brain area) and allows for creativity as well as adaptability when confronted with novel problems — more on the intrinsic differences between brains and computers here. AI, in turn, is powerful and efficient at processing vast amounts of data, taking into account all relationships between variables to form predictions that often defy explanation in human terms (the problem of interpretability). These seemingly opposed profiles favor practical approaches that combine the respective strengths. James DiCarlo, an American neuroscientist at MIT, explores the qualitative differences between biological and artificial neural networks. In a 2018 study, he compared object recognition in primates to a class of deep learning algorithms called Convolutional Neural Networks (CNNs).
He found that the majority of test images were classified similarly well by both systems, but a small number of “critical” images were categorically better recognized by primates, who share the core structure of our visual system.
Figure from DiCarlo showing how our ventral stream, in contrast to feedforward deep CNNs, uses recurrent feedback (red arrows) to enhance processing.
The neural layers that enabled superior object recognition are characterized by many recurrent feedback processes, which improve accuracy in difficult or ambiguous situations. CNNs, however, are much faster at processing and thus better suited to classifying standard cases, not least because performance accuracy in biological systems is not constant but subject to fatigue and fluctuations in attention. These opposite strengths can be exploited by using AI systems that classify the majority of the data and pass critical cases on to a human employee.
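This routing idea is simple to express in code. Below is a minimal sketch, assuming a classifier that returns a label with a confidence score (stubbed here) and a hypothetical confidence threshold of 0.90; in practice the threshold would be tuned per application.

```python
# Hybrid workflow sketch: high-confidence cases are automated,
# low-confidence ("critical") cases go to a human reviewer.

CONFIDENCE_THRESHOLD = 0.90  # assumed cut-off, tuned in practice

def classify(item):
    """Stand-in for a real model call returning (label, confidence)."""
    return item["predicted_label"], item["confidence"]

def route(items):
    automated, for_human_review = [], []
    for item in items:
        label, confidence = classify(item)
        if confidence >= CONFIDENCE_THRESHOLD:
            automated.append((item["id"], label))      # handled by the AI
        else:
            for_human_review.append(item["id"])        # escalated to a human
    return automated, for_human_review

items = [
    {"id": 1, "predicted_label": "invoice",  "confidence": 0.98},
    {"id": 2, "predicted_label": "contract", "confidence": 0.55},
    {"id": 3, "predicted_label": "invoice",  "confidence": 0.93},
]
auto, review = route(items)
print(auto)    # → [(1, 'invoice'), (3, 'invoice')]
print(review)  # → [2]
```

The design mirrors the biological analogy above: the fast, tireless system handles the bulk of standard cases, while the flexible human system is reserved for exactly the inputs on which the model is uncertain.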
Image adapted after: Robotic Process Automation. W. Aalst, M. Bichler, A. Heinzl. 2018, Springer
Enable your systems to continuously improve
To sum up so far: integrating intelligent technology into automation systems drastically increases value, and artificial and human intelligence have opposed strengths, which should be exploited to create optimally efficient workflows. How can this be improved further? This brings me to my last point: sustainability through continuously learning systems. A deep learning algorithm may categorically fail to classify, say, 5% of cases and hand these over to employees. If we enable communication between the two intelligent systems, i.e., feed corrective human input back into the AI component, the algorithm can learn continuously and thus become increasingly autonomous, without requiring months of engineering effort before deployment. Such continuously learning “hybrid” systems combine rapid practical value with undiminished quality, something abrupt full automation achieves only to a limited extent.
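The feedback loop just described can be sketched as follows. The class, batch size, and retraining trigger are all illustrative assumptions; the actual training step is a placeholder for a real model update on the accumulated corrections.

```python
# Sketch of a continuously learning hybrid system: human corrections
# on hard cases are collected and periodically fed back into the model.

RETRAIN_BATCH = 3  # assumed: retrain once this many corrections accumulate

class ContinuouslyLearningSystem:
    def __init__(self):
        self.training_data = []   # accumulated (features, label) pairs
        self.retrain_count = 0

    def record_correction(self, features, human_label):
        """A human reviewer supplies the correct label for a hard case."""
        self.training_data.append((features, human_label))
        if len(self.training_data) % RETRAIN_BATCH == 0:
            self.retrain()

    def retrain(self):
        # Placeholder for an actual training run on self.training_data;
        # each run lets the model absorb the humans' corrections.
        self.retrain_count += 1

system = ContinuouslyLearningSystem()
for i in range(7):
    system.record_correction({"doc_id": i}, "invoice")
print(system.retrain_count)  # → 2
```

Over time, each retraining round should shrink the share of cases escalated to humans, which is precisely the sense in which such a system becomes "increasingly autonomous" without a long engineering phase before deployment.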
The fundamental digitalization of business processes is indispensable for companies that have to assert themselves in a competitive landscape. The considerations above should help ensure a successful implementation. The main points are: 1) integrate AI into traditional automation systems; 2) design these intelligent solutions to allow feedback between user and algorithm, enabling continuous improvement; 3) appreciate the complementary strengths of human intelligence, which has special value in solving higher-level conceptual problems, and artificial intelligence, which is powerful and highly accurate in handling more narrowly defined cases; and 4) conclude from this that the combined strengths of human and artificial intelligence represent the work setting of the future.
Arianna is currently completing her BSc in Neuroscience at the University of St Andrews, with a focus on Computational Neuroscience. Previously, she has done research at Tuebingen University, St Andrews Perception Lab, and Harvard Medical School. Her primary interest lies in computer vision.