Of Brains and Computers

Why AI isn’t getting more human

Photo by Franck V. on Unsplash

Conversations about AI are difficult. This is partly because people hold different definitions of AI, or never consistently commit to one, instead forming opinions based on a fluctuating intuition of what AI is. For example, reacting to news of an improved image classification algorithm that demonstrates near human-level skill on that task with concerns about AI reaching human-level intelligence or replacing humans is simply mistaken. The mention of human-level skill ignites fears of replacement or endangered superiority while ignoring the fact that high performance on a fixed task differs fundamentally from general intelligence or emergent consciousness. This article elaborates on why this fear of replacement is unfounded, preceded by a brief discussion of the common analogy between brains and computers, and concludes that narrow AI is the only AI we can realistically discuss. Machine learning tools perfectly complement our brains’ strengths and weaknesses, and it is because of these complementary characteristics that AI-human collaboration is the effective work setup of the future.

Brains are often compared to computers, and aspects of human cognition are likened to algorithms. Following the rise of AI, such resemblances have been discussed with increasing frequency. In what ways are computers, programs, and algorithms similar to brains and neuronal circuits? In virtually all ways, if you believe John von Neumann, a mathematician who was an avid defender of equating the brain with a computing machine. The posthumous publication of his book “The Computer and the Brain” (1958) preceded the philosophical thesis of multiple realizability: the idea that the same cognitive state can be implemented in different physical states. The philosopher Hilary Putnam and cognitive scientist Jerry Fodor compared the mind-brain relationship to software and hardware, and generalized that the same mental state could be realized by different physical states. The software-hardware analogy starts breaking down once you consider that cognition (the mind) is not some program “run” on the brain: you cannot effectuate a change in cognition without a change in the brain (e.g., temporary synaptic changes). In a way, the mind is an abstraction of the brain, a descriptive model of how functions emerge from the brain’s biological properties, unlike different programs running on a computer (which do not permanently affect the hardware). Nevertheless, there are obvious similarities in the way a brain and a computer transmit information, e.g., both send messages via electrical signals through a seemingly binary system (although neuronal firing is more complex and differentiated). We can find mathematical analogies of cognitive functions and study how the brain solves problems for technical inspiration. After all, it is the product of many millions of years of evolutionary selection, and the most versatile physical implementation of intelligence that exists.

In a 2017 paper, neuroscientist and DeepMind co-founder Demis Hassabis argues for neuroscience-inspired artificial intelligence, holding that studying brain architecture should be central to AI research, since the brain is the only existing proof that such intelligence is even possible. Following Marr’s levels of analysis, we should study the brain at the computational and algorithmic levels, acknowledging the obvious differences in implementation, to gain transferable insights into how intelligence can be constructed and realized. We recognize biology as a source of inspiration in state-of-the-art convolutional neural networks, which share several characteristics with neural computation, such as nonlinear transduction, divisive normalization, and maximum-based pooling of inputs (Yamins and DiCarlo, 2016). These operations are derived from single-cell recordings of the mammalian visual cortex; at a higher level, we also find neural network architectures that loosely follow the hierarchical organization of cortical systems. Similarly, biologically inspired computation on multiple levels has been applied to methods in natural language processing (see Hassabis’ mention of Hinton et al., 2012). Reinforcement learning, one of the hottest methods in AI research, is also rooted in animal learning, specifically in temporal-difference methods derived from animal conditioning experiments. Heuristic recourse to the brain, translating efficient methods of its neural circuits into algorithmic architectures, has led to great advances in computer science and AI research. But this does not change the fact that artificial intelligence, and its foreseeable development, is qualitatively and drastically different from human intelligence.
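To make these shared operations concrete, here is a minimal sketch in plain Python (with hypothetical helper names, not any library’s API) of the three biologically inspired computations mentioned above: nonlinear transduction, divisive normalization, and maximum-based pooling.

```python
def relu(xs):
    """Nonlinear transduction: a neuron-like thresholding nonlinearity
    that passes positive responses and silences negative ones."""
    return [max(x, 0.0) for x in xs]

def divisive_normalization(xs, sigma=1.0):
    """Divide each response by the pooled activity of the whole population,
    a canonical computation observed in the visual cortex."""
    pooled = sigma + sum(abs(x) for x in xs)
    return [x / pooled for x in xs]

def max_pool(xs, size=2):
    """Maximum-based pooling: keep only the strongest response per window."""
    return [max(xs[i:i + size]) for i in range(0, len(xs) - size + 1, size)]

responses = [-1.0, 2.0, 0.5, 3.0]     # toy "neuronal" input
activated = relu(responses)           # [0.0, 2.0, 0.5, 3.0]
pooled = max_pool(activated)          # [2.0, 3.0]
normalized = divisive_normalization(activated)
```

Real convolutional networks apply these operations to image patches in many layers and learn the intervening weights; the sketch only shows the element-wise form of each computation.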
While we can find or create mathematical analogies of function, the bottom line is that artificial intelligence, with its narrow specificity on the one hand and superhuman accuracy and processing speed on the other, differs categorically from general human intelligence: brains and AI systems are good at very different things. Some of these differences stem from neurobiological features of the brain. Its information processing depends on a range of constraints (e.g., myelination, the availability and diffusion time of neurotransmitters, and the prior history of neuronal firing), in contrast to the straightforward mechanics of a computer (the fixed speed of a microprocessor).

The idea of current state-of-the-art AI naturally progressing to human level as it grows more powerful rests on the mistaken assumption that we are moving up a linear spectrum of intelligence from low to high. When talking about the future of AI, we often see images of an “intelligence spectrum” like these:

Intelligence spectrum of AI
computer performance vs human performance linear spectrum

The issue with graphs like these is that presenting intelligence as a one-dimensional, linearly increasing characteristic ignores the qualitative differences between types of intelligence. While “weak” AI is getting more powerful within the narrow domains of its particular applications, e.g., image classification, this advancement will never equal general intelligence, which is qualitatively different. The key distinction, which these graphs ignore, is between strong and weak, or general and narrow, AI. Narrow AI systems display intelligence within a particular field and thus perform highly specialized tasks. General AI is the idea of a system that would resemble the intertwined range of cognitive abilities and general understanding of humans; such systems are still the stuff of sci-fi movies and are not just “more” or “stronger” narrow AI. The issue arises when we, in response to a new feat of narrow AI, start discussing worries or exaggerated hopes that would only apply to general AI. The underlying assumption of these spectrums is that one form simply precedes the other, and that high-performing narrow AI should be expected to transition into generalized human, and then swiftly super-human, levels (a notion perhaps supported by the unfortunate terminology of weak and strong: after all, weakness can turn into strength with sufficient training). This is anything but given. Individual ML algorithms can perform specific tasks, e.g., a genetic algorithm predicting the binding affinities of proteins, or a convolutional neural network detecting faces. Think of the immense number of tasks our brain can perform within the field of perception alone, and take into account the incredible interconnectedness of our cognitive functions: unlike computer systems, our brain is not modular, i.e., it is not an arrangement of separate functional units.
A nudity detection algorithm suddenly developing semantic understanding, or the prospect of assembling countless specialized algorithms in a way that approximates the architecture of over 10^14 synapses, is not even remotely within the scope of our near future. However, seeing machine learning as a toolbox for automation does not take away from the transformative power that these tools hold.

Deep reinforcement learning is revolutionizing novel drug design by generating models of chemical compounds with specific desired physical, chemical, or bioactive properties (Popova et al., 2018). In 2016, Google reduced its energy consumption for cooling by up to 40% with the help of highly accurate, deep-learning-based multidimensional predictions of power usage effectiveness. These recommendation algorithms using supervised learning are now being replaced with more sophisticated algorithms that also use reinforcement learning to manage cooling autonomously, which promises further environmental and financial benefits (see this 2018 MIT review). These applications are fascinating, powerful, and radically innovative, and also completely dissociated from any progression towards consciousness or general understanding. We must remember that AI is not on a natural progression to human intelligence, and that these complementary strengths are innately linked to our structure and material constraints. Efforts to make deep learning algorithms more similar to human cognitive architectures, like DiCarlo’s 2018 CORnet models, or endeavours like Markram’s Human Brain Project, are fascinating, but the similarities to our biological neural nets are artificially constructed. AI does not develop like our brain by itself: any similarities must be engineered. In conclusion, AI is not getting more human, and yet it is the most fascinating technology we could ever have dreamed of.

We at Luminovo acknowledge these qualitative differences between human and artificial intelligence, and are consequently developing a hybrid platform that builds on the best of both worlds. By automating time-consuming screening, monitoring, and data extraction tasks, humans can focus on the most challenging cases and let powerful, continuously improving deep learning models do the rest. This ensures high-quality results early on, and frees up human capacity for the creative and conceptual tasks in which machines will continue to underperform for the foreseeable future.

On a less serious note…

Meme on journalists using humanoid robot and circuit board brain images in articles about AI

About the Author

Arianna Dorschel

Arianna is currently completing her BSc in Neuroscience at the University of St Andrews, with a focus on Computational Neuroscience. Previously, she has done research at Tuebingen University, St Andrews Perception Lab, and Harvard Medical School. Her primary interest lies in computer vision.



Follow us on:

LinkedIn · Facebook · Medium
Copyright © 2021 Luminovo GmbH · Made in Munich · Imprint & Privacy Statement