Hybrid Intelligence – The Symbiotic Relationship Between Human and Artificial Intelligence

Artificial intelligence (AI) has taken the worlds of science and technology by storm and is being applied in every sector, every industry, every discipline. As AI permeates more and more aspects of our lives, one truth is becoming increasingly evident – AI cannot, for the most part, stand alone. Rather, it is Hybrid Intelligence (HI) systems, which combine the very different processing strengths of synthetic intelligence and the human brain, that are proving to have the greatest potential.


Autonomy ≠ Superiority


AI is everywhere: from virtual assistants that you can ask, in everyday language, to perform thousands of different tasks, to chatbots that successfully handle millions of queries every day, to more specialized functions, such as AI-based systems that help diagnose illnesses and data-entry bots, such as AiDock’s Bailey, that substantially reduce document processing times and costs.


Nowhere is the mad dash after AI more prevalent than in the push towards fully autonomous self-driving cars. Alphabet (Google’s parent company), Intel, Microsoft, Baidu, all of the major automakers, VCs the world over, governments, and more are pouring billions of dollars into self-driving R&D. At the core of it all is AI, which is (and will increasingly be) relied upon to evaluate countless variables and make decisions that get passengers from point A to point B without endangering them or other users of the road.


Amongst the leaders of the self-driving pack is Tesla, which has made great strides towards fully autonomous vehicles with its Autopilot and Full Self-Driving (FSD) technologies. However, Tesla (and everyone else) still has a way to go, as Tesla itself states (albeit not in bold headlines) on its Autopilot webpage: “Current Autopilot features require active driver supervision and do not make the vehicle autonomous.” In other words, these are hybrid intelligence systems, not autonomous AI.


This fact was sadly brought into the limelight in April of this year when “… two people [in Texas] were killed when a Tesla with no one in the driver’s seat crashed into a tree and burst into flames” (as reported by Houston television station KPRC). Tesla is now facing claims that its marketing is deceptive, leading people to believe that Teslas with Autopilot and FSD are fully autonomous.


Specifically, AI on its own frequently falls short because, among other reasons:

  1. AI is wired differently than human intelligence: Our brains and AI use fundamentally different algorithms, and each excels where the other fails. For instance, machine learning algorithms are better at spotting complex, subtle patterns in massive data sets, while the brain can process information efficiently even when the input is noisy or uncertain, or when conditions change unpredictably (Silva G., 2019).

  2. AI is good in narrowly defined domains, not open-ended ones: A consequence of the above is that, while AI systems perform well in closed, well-defined problem spaces, they struggle in open-ended domains.

  3. AI doesn’t generalize well: AI systems are typically trained on data sets pertaining to a specific problem space. When asked to deal with a slightly different environment, they balk. They are not good at transferring what they learn from one context to another.

  4. Businesses don’t know how to integrate AI with humans: Companies and organizations often implement AI solutions without considering how they will interact with their human staff. Although not directly a problem with AI itself, this issue can lead to failed implementations, antagonism against automation, and sunk costs.


What all this boils down to is that the omnipotent AIs of science fiction are still far off and that the vast majority of AI systems can benefit from or absolutely require human involvement.


Hybrid – The best of both bits of intelligence

Elaborating upon Moravec’s Paradox (Hans Moravec, 1988), MIT professor Marvin Minsky added that the most difficult human skills to reverse engineer are those that are unconscious. Such human “skills” include common sense, intuition, decision making under ambiguity, flexible thinking, and the like.


Recognizing the comparative advantages of AI and human intelligence, hybrid intelligence (HI) combines them to achieve better results and to enable mutual learning. As we’ve said above in different words, AI shines when conducting specific, well-defined tasks on particular data types in controlled, data-rich environments. Humans, on the other hand, have a clear advantage in ambiguous, abstract, data-sparse environments, and in matters of expertise and intuition built over time.


Of course, there are many flavors of HI systems, each with a different degree of human involvement and a different type of human-machine interaction. In general, the more open the framework for making decisions (less information, greater uncertainty), and the greater the risk associated with the decision (e.g. language translation when negotiating a truce between warring parties), the more human input, participation, and supervision will be called for, as the sketch below illustrates.
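To make that trade-off concrete, here is a minimal Python sketch of a dispatcher that picks a human-machine teaming mode from the openness of the decision framework and the risk of the decision. The mode names anticipate the classification discussed in the next section; the Decision class, the thresholds, and the routing logic are illustrative assumptions, not a real API.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    openness: float  # 0.0 = closed, well-defined framework; 1.0 = open, ambiguous
    risk: float      # 0.0 = negligible consequences; 1.0 = severe consequences

def route(decision: Decision,
          openness_threshold: float = 0.5,
          risk_threshold: float = 0.5) -> str:
    """Pick a human-machine teaming mode from decision openness and risk.

    Thresholds are illustrative, not prescriptive.
    """
    is_open = decision.openness >= openness_threshold
    is_risky = decision.risk >= risk_threshold
    if not is_open and not is_risky:
        return "machine-based"             # machine acts alone; humans supervise
    if is_open and not is_risky:
        return "cyclic machine-human"      # authority cycles; humans coach
    if not is_open and is_risky:
        return "sequential machine-human"  # machine acts; humans are sentinels
    return "human-based"                   # humans hold final authority

# Translating during truce negotiations: open framework AND high risk.
print(route(Decision(openness=0.9, risk=0.95)))  # -> human-based
```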


Classifying HI systems – Illustrative examples

In a recent article in the MIT Sloan Management Review, “Designing AI Systems with Human-Machine Teams”, the authors (Maria Jesus Saenz, Elena Revilla, and Cristina Simón) classify HI systems into four categories based on the degree of decision-making openness and risk:


Machine-Based AI Systems (closed decision framework, low risk) – “Machines perform tasks independently, with humans playing only supervisory foreman roles.”

Example: Inventory bots that know how to store and retrieve goods with little to no human intervention.


Cyclic Machine-Human AI Systems (open decision framework, low risk) – “Authority cycles back and forth from machine to human. Humans act as coaches for the AI system, enhancing the learning.”

Example: Chatbots. While they have come a long way, the most effective chatbots today are those that know when to pass the baton to humans and that then learn from what the humans have done, as in the sketch below.
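Below is a minimal sketch, in Python, of such a hand-off loop. The confidence threshold, the stand-in model, and the escalation function are all hypothetical; the point is the cycle itself: the machine answers when confident, hands off when not, and records the human’s answer as new training data.

```python
# A minimal sketch of a cyclic machine-human loop. answer_with_confidence
# and escalate_to_agent are hypothetical stand-ins, not a real chatbot API.

CONFIDENCE_THRESHOLD = 0.75
training_log = []  # human-resolved queries, fed into the next training run

def answer_with_confidence(query: str) -> tuple:
    """Stand-in for the bot's model, returning (answer, confidence)."""
    if "refund" in query.lower():
        return "Refunds are processed within 5 business days.", 0.92
    return "I'm not sure I understood that.", 0.30

def escalate_to_agent(query: str) -> str:
    """Stand-in for handing the conversation to a human agent."""
    return f"[agent reply to: {query}]"

def handle(query: str) -> str:
    answer, confidence = answer_with_confidence(query)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer                        # the machine holds authority
    human_answer = escalate_to_agent(query)  # authority cycles to the human
    training_log.append((query, human_answer))  # the bot learns from the human
    return human_answer

print(handle("How do refunds work?"))   # answered by the machine
print(handle("My parcel arrived wet"))  # escalated to a human, then logged
```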


Sequential Machine-Human AI Systems (closed decision framework, high risk) – Machines perform most of the tasks independently, with humans serving as sentinels.

Example: AiDock’s Cody HS code classification best fits this category. Incorrect HS codes can lead to severe delays and be very costly, so the risk is considerable. On the other hand, Cody is highly accurate and, importantly, knows when it needs help, i.e. when to raise the flag for a human to make the call (see the sketch below).
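The following sketch shows the sentinel pattern in general form. To be clear, it is not Cody’s actual implementation; the classifier, the sample HS code, and the confidence threshold are stand-ins chosen for illustration.

```python
# An illustrative sketch of the "human as sentinel" pattern for HS code
# classification. The classifier and its confidence values are hypothetical.

REVIEW_THRESHOLD = 0.90  # high bar: incorrect HS codes are costly

def classify_hs_code(description: str) -> tuple:
    """Stand-in for a trained classifier returning (hs_code, confidence)."""
    known = {"cotton t-shirt": ("6109.10", 0.97)}
    return known.get(description.lower(), ("9999.99", 0.40))

def process_shipment_line(description: str) -> dict:
    code, confidence = classify_hs_code(description)
    if confidence >= REVIEW_THRESHOLD:
        return {"hs_code": code, "source": "machine"}
    # Raise the flag: the hard case goes to a human broker for the final call.
    return {"hs_code": None, "source": "flagged_for_human",
            "machine_suggestion": code, "confidence": confidence}

print(process_shipment_line("cotton t-shirt"))      # classified by the machine
print(process_shipment_line("vintage brass lamp"))  # flagged for human review
```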


Human-Based AI Systems (open decision framework, high risk) – Humans and machines interact through continuous loops. Humans act as experts and have the final authority.

Example: Identity verification systems (IVS). One IVS configuration that works extremely well has humans verify the identity of anyone the AI system hasn’t seen yet. The AI then learns from the human verification and employs that new information in future verifications. Humans always have the final say, of course. A sketch of this pattern follows.
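Here is a minimal sketch of that loop. It assumes a deliberately simplified notion of an identity “signature”; in a real IVS this would be a biometric or document check, and all of the function names are hypothetical.

```python
# A sketch of the human-in-the-loop verification pattern described above.
# The string-matching "signature" logic is a hypothetical simplification.

verified_people = {}  # identity -> signature confirmed by a human

def machine_verify(identity: str, signature: str):
    """Return True/False if the AI has seen this identity; None if unseen."""
    if identity not in verified_people:
        return None  # never seen before: defer to a human
    return verified_people[identity] == signature

def human_verify(identity: str) -> bool:
    """Stand-in for a human reviewer's judgment; humans have final authority."""
    print(f"[human review requested for {identity}]")
    return True  # placeholder outcome for the sketch

def verify(identity: str, signature: str) -> bool:
    result = machine_verify(identity, signature)
    if result is None:
        result = human_verify(identity)
        if result:
            # The AI learns from the human's verification for next time.
            verified_people[identity] = signature
    return result

verify("alice", "sig-123")  # first time: human verifies, the AI remembers
verify("alice", "sig-123")  # second time: the machine verifies on its own
```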


From AI revolution to hybrid evolution

While AI has been around for some time already, new applications are being developed continually, many of them revolutionary, and the biggest revolutions are still ahead of us (think “singularity”). At the same time, as we uncover the limitations and deficiencies inherent in AI, “reintroducing” humans into the mix is a natural progression, or evolution, that leads to more robust systems and better overall performance.



