Machine Learning – How do machines think?
When Alan Turing proposed the Turing test in his paper “Computing Machinery and Intelligence”, which laid the foundations of artificial intelligence (AI) and machine learning, he was confronted with the question: “Can machines think?” Turing felt that the question was confusing and ill-posed, so he rephrased it as: “Can machines do what we (as thinking entities) can do?”
The difference between machine learning and AI
However unimportant the above episode may seem at first glance, it is enough to reveal the difference between AI and machine learning. Whereas artificial intelligence is our general attempt to create computer programs that think like humans, machine learning is defined as “the ability of a program to learn without being explicitly programmed”. It is the practical result of our realization that we can make a machine learn how to perform a task by providing it with data and guiding it.
The art of teaching a machine
Human learning is a complex and subjective process and we certainly haven’t found the perfect way to do it yet. So, how did we go about teaching machines?
Machines, or computer programs, can learn from the data we provide them with. These data consist of features and a class attribute, or label. Take, for example, the task of image recognition, where a program is presented with images and must recognize what they portray. In this task, the features of an image are its pixel values and the class is the label a human has attached to it, such as “cat” or “house”.
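As a concrete illustration, here is a minimal sketch, using NumPy, of what one labelled example might look like to a program; the 28×28 image size and the “cat” label are arbitrary choices made for the example.

```python
import numpy as np

# A small 28x28 greyscale image: each entry is a pixel intensity from 0 to 255.
# These 784 numbers are the "features" the program actually sees.
image = np.random.randint(0, 256, size=(28, 28))

features = image.flatten()   # shape (784,): one feature per pixel
label = "cat"                # the class a human has attached to this image

print(features.shape, label)
```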
After the data have been prepared, the most important step in machine learning is training the model. Models are mathematical formulations of what the machine has learned. Artificial neural networks are a popular type of model, inspired by the workings of the human brain. They consist of simple units, called neurons, that are interconnected to form the neural network. We can picture this network as layers of neurons placed one after the other. At this point one naturally wonders: how can these models recognize cats from simple pixels?
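The sketch below, again in NumPy, shows what “layers of neurons placed one after the other” amounts to computationally: each layer is a set of weighted sums of its inputs passed through a simple non-linearity. The layer sizes and random weights are purely illustrative; training is the process of adjusting those weights.

```python
import numpy as np

def layer(inputs, weights, biases):
    # One layer of neurons: weighted sums of the inputs, passed through a
    # simple non-linearity (ReLU) so the network can capture complex patterns.
    return np.maximum(0, inputs @ weights + biases)

rng = np.random.default_rng(0)
x = rng.random(784)   # the pixel features of one image

# Two layers placed one after the other; training would adjust these weights.
hidden = layer(x, rng.standard_normal((784, 64)), np.zeros(64))
scores = hidden @ rng.standard_normal((64, 2))   # e.g. scores for "cat" vs "not cat"
print(scores)
```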
Mimicking human behaviour
Surely, when a person looks at a picture, they don’t “read” the pixel values. Our understanding of image recognition has taught us that humans look for particular attributes in a picture that lead them to recognizing objects. Deep neural networks, which are simply classical artificial neural networks with many layers, are successful due to their ability to mimic this behaviour. For example, one layer could be used to recognize edges in the image, the next one to recognize shapes, such as circles and rectangles, while deeper layers can answer intuitive questions such as: “Is there something fluffy in the picture?” or “Are there any walls?”
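A small convolutional network makes this layering concrete. The sketch below uses the Keras API; the number and size of the layers are arbitrary, and the comments only indicate the kind of feature each stage tends to pick up, not anything the layers are explicitly told to do.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A small deep network for 64x64 colour images, classifying "cat" vs "not cat".
model = keras.Sequential([
    layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),  # low-level patterns such as edges
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),                           # simple shapes built from edges
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),      # more abstract combinations ("fluffy?", "walls?")
    layers.Dense(2, activation="softmax"),    # final scores for the two classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```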
The power of machine learning comes from its ability to generalize. This means that, after seeing many pictures of cats, the program manages to learn an abstract notion of cat-iness and can from then on recognize cats it has never seen before. Of course, a model is only as good as the data it was trained on and cannot generalize over concepts it has never learned about.
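Generalization is easy to check in practice: hold some labelled examples back during training and measure how well the model does on them. The sketch below illustrates the idea with scikit-learn’s small built-in digits dataset standing in for cat photos.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Split the labelled images: the model never sees the test set while learning.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
model.fit(X_train, y_train)

# Accuracy on images the model has never seen measures how well it generalizes.
print("accuracy on unseen images:", model.score(X_test, y_test))
```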
Machine learning in robotics
When hearing the term machine learning, probably the first thing that comes to mind is a robot holding a book. Although quite far from the actual practice of machine learning, robotics is indeed a significant field that offers a variety of applications. From military drones and swarm robotics, to medical robots, such as the da Vinci system for robot-assisted surgery, and autonomous vehicles, these systems can be deployed in real environments to replace humans in all sorts of tasks.
In contrast to image recognition, which is a supervised learning task because it relies on human-labelled data, robotics usually involves reinforcement learning. Here, machines are not presented with labelled data, but learn by interacting with their environment through trial and error. Roboticists believe that this type of machine learning is both the most promising and the hardest one to achieve. Depending on the application, these systems may face extreme safety requirements, battery constraints or deployment costs that simple computer programs are not concerned with.
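To make trial and error concrete, here is a minimal sketch of tabular Q-learning, a classic reinforcement learning method, on a made-up five-cell corridor where the only reward comes from reaching the rightmost cell. The environment and all the numbers are invented for illustration.

```python
import numpy as np

# A toy corridor of 5 cells; the goal (reward = 1) is the rightmost cell.
n_states, actions = 5, [-1, +1]          # the robot can step left or right
Q = np.zeros((n_states, len(actions)))   # learned value of each action in each cell
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration rate

rng = np.random.default_rng(0)
for episode in range(200):
    state = 0
    while state != n_states - 1:
        # Explore occasionally; otherwise pick the action that currently looks best.
        a = rng.integers(len(actions)) if rng.random() < epsilon else int(Q[state].argmax())
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: nudge the estimate towards reward + discounted future value.
        Q[state, a] += alpha * (reward + gamma * Q[next_state].max() - Q[state, a])
        state = next_state

print(Q.round(2))   # "step right" ends up with the higher value in every non-goal cell
```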
Machine learning in industry
The applications of machine learning are flourishing, with every company, start-up and research group trying to leverage the opportunities that come from equipping their technology with this new kind of intelligence. However, companies like Google, which formed the AI-centered research team Google Brain in the early 2010s, have recognized the true power of machine learning, which lies not in small innovations but in the ability to disrupt and create markets.
Examples of important markets with AI potential are numerous. The Internet of Things can leverage data from smartphones and wearable devices, such as the Google Glass, to offer a personalized user experience. The field of business agility, which aims to help companies maintain a competitive edge by quickly adapting to the market’s needs, can reach a whole new level by analysing Big Data and employing predictive analytics.
However, one of the largest and most meaningfully impacted areas could be the healthcare industry. Medical systems equipped with machine learning capabilities can replace every process in the medical pipeline, from data analysis to diagnosis to treatment. Healthcare will become more personalized and more efficient, and operational costs will fall, especially for the pharmaceutical industry, where drug development is in decline. For example, Samsung has already recognized the potential of using smartphones to quickly and easily detect diseases such as breast cancer and Parkinson’s disease.
The price of deep learning
Although an older concept, deep learning conquered the AI scene in 2016, with researchers realizing that machine learning can prove very powerful when it becomes deeper. But what exactly got deeper? The neural network architectures that machine learning uses. The sophistication of the learning algorithms. Their ability to solve complex problems. And, unfortunately, our inability to understand them.
The more powerful a machine learning model is, the harder it is to explain how it works; you therefore have to trust its output blindly. It is not difficult to see the ethical issues this raises. For example, imagine the dangers of an AI-enabled medical system that wrongly predicts cancer.
Risks of automation
In 2015, Amazon discovered that an experimental AI system it had been developing to evaluate job applications discriminated against women. When the story became public, it sparked concern and motivated many companies to re-evaluate their hiring processes. But how was this problem caused in the first place?
Although machine learning practitioners can take special care when designing learning algorithms, there is one thing they don’t have control over: the data. As we described earlier, most of these algorithms learn from labelled data, the quality and quantity of which determine the power of the learned model.
Unfortunately, our world is biased, and so is the data we have about it. If Amazon, like most technology companies, has so far hired primarily men, reflecting the long-standing male dominance in the industry, then an AI built on its data will learn to do the same. Machine learning algorithms cannot create bias, nor can they erase it. However, the automation they bring can perpetuate it and amplify it to unprecedented levels.
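A simple data audit already hints at the problem. The sketch below uses a tiny, entirely made-up table of past hiring decisions; a model trained on labels like these will tend to reproduce whatever imbalance the audit reveals.

```python
import pandas as pd

# Hypothetical historical hiring data; the column names and values are invented.
past_hires = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "F", "M", "F", "M"],
    "hired":  [1,   1,   0,   1,   0,   1,   0,   1],
})

# Hiring rate per group in the training data: this is what the model will learn from.
print(past_hires.groupby("gender")["hired"].mean())
```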