Artificial Intelligence: A layman’s guide
In 1997, the IBM supercomputer Deep Blue beat the world chess champion Garry Kasparov. In 2011, Watson, an AI-powered computer system also developed by IBM, competed on Jeopardy!, a TV quiz show in which participants provide a question in response to general-knowledge clues, and won first place against two former champions. In 2016, Google’s AI program AlphaGo defeated the world champion Lee Sedol at the ancient Chinese game of Go. Before we can appreciate how artificial intelligence (AI) may affect our lives, we have a few questions to answer for ourselves. What is artificial intelligence? Who are the people behind it? And why does it like playing games so much?
Speaking the same language
Artificial intelligence is arguably the most over-used and least understood term of our century. It may be surprising to realise that the biggest quarrels related to AI – among academics, market specialists and journalists – are due to people having different interpretations of what this new technology is. Here are a few fundamental definitions that will help you navigate the AI landscape:
Artificial Intelligence (AI)
Formally defined as the area of Computer Science concerned with intelligence demonstrated by machines, this term leaves a lot of room for interpretation. When it comes to the purpose of AI, there are two major schools: weak AI and strong AI. Weak AI only expects a computer to perform well at a task believed to require intelligence (such as playing chess). In contrast, strong AI, also known as synthetic intelligence, is concerned with the philosophical question of whether machines can be made to have consciousness or a mind.
Can we achieve AI? Based on the weak definition, yes. Based on the strong definition, how should we know? It’s not as if we have a mathematical formula for what consciousness is! What is more, history shows that once humans understand how an AI success was achieved, they no longer consider it intelligent. This realization is called the AI effect: “AI is whatever hasn’t been done yet”.
Statistics
Although it sits oddly among the other hyped terms of this list, statistics should accompany every effort to explain AI. As the area of mathematics concerned with analysing data to uncover properties and patterns, it is the predecessor of most AI techniques.
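To make this concrete, here is a minimal sketch of drawing a pattern from data, using only plain Python. The data set is invented for illustration:

```python
import math

# Hypothetical data: hours studied vs. exam score for five students.
hours = [1.0, 2.0, 3.0, 4.0, 5.0]
scores = [52.0, 55.0, 61.0, 68.0, 74.0]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    # Pearson correlation: how strongly two variables move together.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(hours, scores), 3))  # close to 1: a strong linear pattern
```

A correlation close to 1 suggests the two variables move together – exactly the kind of property that statistics, and later ML, extracts from raw data.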
Machine Learning (ML)
This popular cousin of statistics is a tool in the quiver of AI. The purpose of Machine Learning (ML) is to learn how to perform tasks by training mathematical models, such as neural networks. Training in the AI literature refers to using algorithms that employ statistical inference to discover the properties of a task. Although it originated in statistics, ML is today a very wide branch that has evolved its own tools, such as deep learning, and places great importance on the complexity of algorithms and heuristics. Whereas most ML algorithms require data sets to learn from, in the branch called reinforcement learning (RL) agents learn by interacting with their environment through trial and error. It has often been argued that RL is the AI technique that most closely resembles human intelligence.
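As an illustration of what “training” means, the following toy sketch – invented data, not any particular library – fits a one-parameter model y ≈ w·x by repeatedly nudging the parameter w in the direction that reduces the error:

```python
# Toy illustration of "training": fit y ≈ w * x by gradient descent.
# The data and learning rate are invented for illustration.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # roughly y = 2x

w = 0.0    # the model's single parameter, initially just a guess
lr = 0.01  # learning rate: how big each corrective step is

for _ in range(1000):
    # Average gradient of the squared error; a positive gradient means
    # w is too large, a negative one means it is too small.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # nudge w against the gradient

print(round(w, 2))  # the learned parameter, close to 2
```

After enough passes over the data, w settles near the value that best explains the examples – the model has “learned” the task from data rather than from an explicit rule.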
Big Data
More a collection of techniques than a scientific field, this area is our attempt to perform statistics or ML when the quantity and distributed storage of the data do not allow for classical processing.
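A miniature sketch of the map-reduce pattern that underlies many big-data systems: each “node” processes its own shard of the data independently, and the partial results are then merged. The shards below are invented for illustration:

```python
from collections import Counter
from functools import reduce

# Three shards of text, as if stored on three different machines.
shards = [
    "the quick brown fox",
    "the lazy dog",
    "the fox and the dog",
]

# Map step: every shard is counted independently (in a real system,
# in parallel on separate machines).
partial_counts = [Counter(shard.split()) for shard in shards]

# Reduce step: merge the partial counts into one global answer.
total = reduce(lambda a, b: a + b, partial_counts, Counter())

print(total["the"])  # → 4
```

The point is that no single machine ever needs to hold all the data at once – only its own shard and, eventually, the merged summaries.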
Is it AI autumn yet?
Like all human-related phenomena, AI can be characterized by how much society values it. AI springs are periods when AI research and applications flourish, which usually leads humans to overestimate the technology’s potential. As a rule, AI winters follow, in which funding for AI research is cut because it has failed to satisfy the high expectations. These periods are historical phenomena, so they can only be recognized ex post. Although we are undoubtedly living in a period of AI successes, these successes mostly concern games, whose structure is simple and intuitive, rather than real-life applications such as robotics. Therefore, it might be useful to take a step back from time to time and ask: are our discoveries progressing faster than our expectations?
Living in the Artificial Intelligence era
AI is an idea powerful enough to inspire prior to being understood. But before we prophesy a Matrix-like AI apocalypse where humans are enslaved, or a robotic utopia where people no longer need to work, we first need to appreciate how AI is already impacting our lives. If you own a smartphone, use social media or have a GPS in your car, you are already participating in the Internet of Things. Your identity and life – or at least a part of them – have been digitized and belong to a network with collective intelligence. Stressful, right? Although most of the data related to our online activities are today handled and stored in the databases of large companies, such as Google, Facebook and Amazon, it is believed that in the future this intelligent web will be completely decentralized. For this reason, technologies that deal with security and trust, such as Blockchain, are gaining interest, as the authority provided by companies will need to be replaced by roots of trust and trust anchors.
The democratization of AI
As is the case with all technological advancements, our society can choose to incorporate AI into everyday life for the benefit of the people, or let it remain an advanced tool for the few. Judging by recent trends, the community is leaning towards the former, with specialists even claiming that AI will become a human right. For example, companies like Facebook are open-sourcing their tools, and research groups place great importance on the reproducibility of their ML experiments.
The field of Automated Machine Learning (AutoML) has also emerged, which aims to make the training of ML models a fully automated process. Although not yet mature, this area has revealed the possibility of making AI practice approachable to anyone with access to a computer. Platforms such as Amazon Web Services and Microsoft Azure offer serverless computing capabilities, where users can easily set up ML experiments that run in the cloud.
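The AutoML idea can be caricatured in a few lines: rather than a person hand-tuning a model setting (here, the learning rate of a toy model), a loop tries candidate settings automatically and keeps the one with the lowest error. Everything below – data, model and candidate values – is invented for illustration and says nothing about how any real AutoML service works internally:

```python
# Toy sketch of automated hyperparameter tuning.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]  # roughly y = 2x

def train(lr, steps=200):
    # Fit y ≈ w * x by gradient descent with the given learning rate.
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def error(w):
    # Mean squared error of the fitted model on the data.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# The "automated" part: search the candidate settings, keep the best.
candidates = [0.0001, 0.001, 0.01]
best_lr = min(candidates, key=lambda lr: error(train(lr)))
print(best_lr)  # → 0.01, the setting whose trained model fits best
```

Real AutoML systems search far larger spaces – model families, architectures, preprocessing steps – but the principle is the same: the tuning a specialist would do by hand becomes just another loop.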
Artificial Intelligence and the job market
In an age when everyone claims to be using AI, one has to be extra cautious when choosing a team, as there is a variety of skill sets associated with this area. Advancing the state of the art in ML requires strong research and analytical skills, algorithmic thinking and a solid mathematical background. Companies, on the other hand, place great importance on an ML practitioner’s experience with programming frameworks and platforms, whereas businesses need people with a deep understanding of the potential and limitations of these technologies and of how they relate to their market.
A fear often expressed about AI and the job market is that many jobs will become automated and, thus, obsolete. When Andrew Ng, Stanford University Professor and Google Brain co-founder, was once asked about the power of AI, he replied: “Almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI”. Although this is an imperfect rule that does not take into account the experience or talent some work requires, it may be useful to ask yourself: how much mental thought have you been putting into your work lately?