Augmented Intelligence – No artificial flavours added to your AI
When our ancestors came up with tools and techniques to cultivate the earth, humanity experienced an Agricultural Revolution. When Watt invented the steam engine, he laid the groundwork for the Industrial Revolution. When personal computers and smartphones became part of our everyday life, the Digital Age began. Society has experienced continuous change due to technological inventions and has mostly benefited from it, at least in retrospect. So why is Artificial Intelligence, arguably the most powerful tool in our technological quiver today, viewed with mistrust?
With tech leaders such as Elon Musk proclaiming that “AI is our biggest existential threat” and scientists like Stephen Hawking warning that “The development of full artificial intelligence could spell the end of the human race”, it is not hard to sense the negative aura surrounding AI. The concerns usually stem from the fear that AI could replace humans in most tasks and thus lead to high unemployment or, even worse, the complete obliteration of the human race. Admittedly, there is something eerily gloomy about the word “artificial”. When AI was baptised, a sharp distinction was drawn between machine intelligence and human intelligence: machines were meant to be inspired by humans, to mimic them and, ultimately, to surpass them. But does this agree with today’s picture of AI?
A focus on collaboration
The notion of intelligence augmentation was probably first put forward by Licklider. As far back as the 1960s, in his paper “Man-Computer Symbiosis”, Licklider argued that the purpose of AI is not to replace humans, but to collaborate with them so that humans can perform intellectual operations more effectively. Today, this need for collaboration is more profound than ever.
The huge successes of deep learning and big data have helped us automate data analysis and pattern recognition to an impressive degree. But these examples can be misleading. Neural networks, although loosely inspired by the brain, function very differently from human brains. They are powerful statistical models that can approximate complex relationships, and “learning” refers to their ability to find the parameters of these models by processing data, without needing our help. These networks are remarkably good at beating world champions at chess, yet they blatantly fail at tasks that a five-year-old child can do, such as recognizing and picking up objects or learning a new language.
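To make that point concrete, here is a minimal, purely illustrative sketch in Python with NumPy (not taken from any system discussed here) of what “learning” amounts to: the parameters of a tiny network are repeatedly nudged by gradient descent until the model fits some toy data. The data, the network size and the learning rate are arbitrary choices made only for illustration.

```python
# A "neural network" is just a parameterized statistical model;
# "learning" means adjusting its parameters to fit data via gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: approximate y = sin(x) from 200 noisy-free samples.
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
y = np.sin(X)

# Parameters of a one-hidden-layer network with 16 hidden units.
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass: tanh hidden layer, linear output.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                      # mean-squared-error gradient signal

    # Backward pass: gradients of the loss with respect to each parameter.
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = err @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)

    # "Learning" is nothing more than these parameter updates.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final mean squared error:", float((err ** 2).mean()))
```

Nothing in this loop resembles understanding; it is curve fitting, which is exactly why such models excel at narrow pattern-recognition tasks and struggle outside them.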
Humans, on the other hand, lack the efficiency of machines, and their capacity for processing data is limited. What humans do possess is intuition and conceptual understanding, which are essential when applying AI to real-world problems. For this reason, industry disruptors such as IBM believe that what we have so far been calling AI should be regarded as nothing more than an augmentation of our existing human intelligence.
Augmenting the healthcare industry
Technological advancements have always found fertile ground in the healthcare industry, where the need for cost reduction, organizational efficiency and automation is profound. Increased and more reliable connectivity, achieved through advances in communication technology and infrastructure, has enabled unconventional medical fields such as telepsychiatry and telemedicine. Now, the advanced data analysis techniques of today’s AI can assist physicians in decisions about diagnosis and treatment, with systems such as IBM’s Watson already being used by organizations around the world.
Augmenting business
Transforming the business model for AI won’t be a choice but a necessity, brought about by the extensive digitalization of the market and the monetization of data. Marketing has evolved from brute-force campaigns to cognitive approaches, where machine learning techniques are employed to measure customer engagement, track product trends and design policies that offer a more personalized and fulfilling customer experience. Business intelligence, too, is already benefiting from AI-powered big data analysis, predictive analytics and advanced data visualization.
Although this evolution should be viewed as an augmentation of the workforce’s existing abilities, a certain degree of digital readiness is required, and the benefits and caveats of AI need to be understood at every level of the business hierarchy. As researchers Erik Brynjolfsson and Andrew McAfee warned, “Over the next decade, AI won’t replace managers, but managers who use AI will replace those who don’t.”
Human and machine intelligence are coevolving
Discussions around AI are still shrouded in a science-fiction veil, but augmented intelligence is already here. Naturally, machine learning research will keep evolving, but a focus on its human-centric potential is essential for its success. Although AI has drifted away from its origins in neuroscience, we expect future AI applications to improve by imitating aspects of human intelligence, such as understanding general concepts, an ability the field of transfer learning focuses on today.
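As a rough illustration of the transfer learning idea, the sketch below assumes PyTorch and torchvision (neither is mentioned above, and the five-class task is hypothetical): a network pretrained on ImageNet is reused, its layers are frozen, and only a new final layer is trained, so the general visual concepts captured during pretraining carry over to the new problem.

```python
# Illustrative transfer learning sketch using PyTorch/torchvision (assumed libraries):
# reuse a pretrained network and retrain only its final layer on a new, smaller task.
import torch
import torch.nn as nn
from torchvision import models

# Load a network whose weights already encode general visual concepts (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so their "general knowledge" is kept as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer for a hypothetical 5-class problem.
num_classes = 5
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new layer's parameters are updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (real data would come from a DataLoader).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss on dummy batch:", loss.item())
```

The appeal of this pattern is that the expensive, data-hungry part of learning is done once and shared, while adapting to a new task needs only a fraction of the data and compute.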
As our history of technological advancement confirms, change is inevitable. But when viewed as an opportunity to augment our skills and improve our standard of living, change does not sound so bad.
“The computer is a bicycle for the mind.” (Steve Jobs)