Supervised vs Unsupervised Learning
What is supervised learning? Combined with big data, this machine learning technique has the power to change the world. In this article, we’ll explore the topic of supervised learning, but will first touch on some recent machine learning history.
In 2012, Alex Krizhevsky, a researcher at the University of Toronto, kicked off the third golden age of artificial intelligence. By a large margin, he beat the state-of-the-art in automatic labeling of ImageNet,1 a database of over a million images from 1000 different categories ranging from canoes to cats and frogs to hotdogs. If you’ve ever wondered why artificial intelligence has been all over the news in the last decade, the hype began with this breakthrough.
Alex’s novel approach was to parallelize the computation of his neural networks, allowing them to be wider and deeper than ever before.2 But how did he train his network? That’s all down to supervised learning.
What is supervised learning?
(Alex’s network guesses the top five most likely classes for some images in ImageNet)2
This method of learning is intrinsic to all of us. If you’ve ever practiced a language, revised for a math exam or done a pub quiz then you’ve learned under supervision. Imagine holding up an apple to a baby and asking, “What’s this?”. The baby points to the apple and declares “Banana!!”. The baby was close, but there was some error in its prediction. “App-le”, you say. The baby updates its language model, and next time you show the apple it says “Appum!!”. Eventually, that baby will learn to say “Apple”.
This is exactly the update formula used in supervised learning. In short, we are testing our model, the baby, with questions and supervising it with true answers. Or in statistical speak, we fit models to minimize the error between their predictions and the ground truth.
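To make that update loop concrete, here is a minimal sketch (my own illustration, not from the article): a one-parameter model repeatedly nudged to shrink the squared error between its predictions and the ground truth, just as the baby’s guesses were corrected.

```python
import numpy as np

# Toy supervised learning: fit y = w * x by gradient descent.
# The ground-truth labels play the role of the parent correcting the baby.
x = np.array([1.0, 2.0, 3.0, 4.0])
y_true = 2.5 * x   # labels; the true weight is 2.5

w = 0.0            # the model starts out knowing nothing
learning_rate = 0.01

for step in range(200):
    y_pred = w * x                   # the model makes a prediction
    error = y_pred - y_true          # supervision: compare to the truth
    grad = 2 * np.mean(error * x)    # gradient of mean squared error w.r.t. w
    w -= learning_rate * grad        # update the model to reduce the error

print(round(w, 3))  # converges towards 2.5
```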
You were posing the baby a classification problem as it needed a categorical response. Other types of questions are concerned with estimating quantities, which we call regression problems. These include guessing the price of a house, someone’s age or the weight of your suitcase.
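The distinction is easy to see in code. Here is a hedged sketch using scikit-learn (the library, features and numbers are my choices for illustration): a classifier returns a category, a regressor returns a quantity.

```python
from sklearn.linear_model import LinearRegression, LogisticRegression

# Classification: a categorical answer ("apple" vs "banana"), encoded as 0/1.
fruit_features = [[150, 7.0], [170, 7.5], [120, 20.0], [110, 19.0]]  # weight (g), length (cm)
fruit_labels = [0, 0, 1, 1]  # 0 = apple, 1 = banana
classifier = LogisticRegression().fit(fruit_features, fruit_labels)
print(classifier.predict([[160, 7.2]]))  # -> a category, e.g. [0]

# Regression: a continuous answer, e.g. the price of a house.
house_areas = [[50], [80], [120], [200]]  # floor area in square metres
house_prices = [150_000, 240_000, 350_000, 600_000]
regressor = LinearRegression().fit(house_areas, house_prices)
print(regressor.predict([[100]]))  # -> a quantity, roughly 300,000
```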
Let us continue our machine learning story. The year is now 2015 and Kaiming He, a researcher at Microsoft, builds a supervised neural network that, for the first time, surpasses human-level performance in classifying ImageNet.3 Since then, focus has been shifting towards unsupervised learning and what we can achieve without labels.
What is unsupervised learning?
Unsupervised learning tackles the seemingly impossible task of learning useful information without any sample-specific prior knowledge. Recall our supervised learning baby. When it was first born, it had never seen any objects and didn’t know a single word. How did it go from knowing nothing about the world to knowing something? A popular term for this kind of problem in computer science is bootstrapping, so called because the task is akin to lifting yourself up by your own bootstraps.
(Unsupervised clustering on ImageNet.1 Do you agree with the red and green groupings into living and non-living things, or would you have done it differently? Perhaps by color or time of day?)
This is usually achieved by making generic assumptions about the dataset as a whole. Popular ones are listed below, with short code sketches after the list:
- Clustering – assuming the data naturally falls into a finite number of distinct groups. We might expect ImageNet’s 1000 classes to divide into 1000 groups. Algorithms that help decide what data should go in what group include centroid-based methods such as k-means and Gaussian mixture models, and graph-based approaches such as spectral clustering.
- Dimensionality reduction – assuming the data can be compressed while preserving most of its information. Everyday examples are lossy compression formats such as JPEG and MP3; in machine learning we use methods such as principal component analysis and autoencoders.
- Anomaly detection – expecting that anomalous samples lie outside the distribution of normal ones. By showing our model only normative samples, anomalous ones are flagged by their distance from the normative population. In practice, we assume the normative population follows a Gaussian distribution and define anomalies as lying some number of standard deviations from the mean.
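First, clustering. In this minimal sketch (my example, using scikit-learn’s k-means), the algorithm recovers two groups of points without ever being told which sample belongs where:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two obvious blobs of 2-D points; the algorithm never sees group labels.
rng = np.random.default_rng(0)
blob_a = rng.normal(loc=[0, 0], scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=[5, 5], scale=0.5, size=(50, 2))
data = np.vstack([blob_a, blob_b])

kmeans = KMeans(n_clusters=2, n_init=10).fit(data)
print(kmeans.labels_[:5], kmeans.labels_[-5:])  # two distinct group ids
```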
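Next, dimensionality reduction. A hedged sketch with principal component analysis (dataset choice is mine): 64-pixel digit images are compressed down to two numbers each, keeping the directions of greatest variance.

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

# Compress each 8x8 digit image (64 values) to just 2 values.
digits = load_digits()
pca = PCA(n_components=2)
compressed = pca.fit_transform(digits.data)

print(compressed.shape)                 # (1797, 2)
print(pca.explained_variance_ratio_)    # variance captured by each component
```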
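Finally, anomaly detection as described above: fit a Gaussian to the normative samples and flag anything more than a few standard deviations from the mean (the threshold of 3 is my illustrative choice).

```python
import numpy as np

# Fit a Gaussian to normative samples only.
normal_samples = np.random.default_rng(1).normal(loc=10.0, scale=2.0, size=1000)
mean, std = normal_samples.mean(), normal_samples.std()

def is_anomaly(x, threshold=3.0):
    # Flag samples lying more than `threshold` standard deviations from the mean.
    return abs(x - mean) / std > threshold

print(is_anomaly(10.5))  # False: well inside the normative population
print(is_anomaly(25.0))  # True: far outside it
```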
Self-supervised learning
Will machines take over the world? Not yet, but they are learning to teach themselves. In self-supervised learning, the model generates its own labels from the raw data by solving a pretext task, such as reassembling a scrambled image like a jigsaw puzzle.
(Given (b), the model must rearrange the jigsaw pieces to reconstruct (a))4
Justifiably, you may question the usefulness of an AI that solves jigsaw puzzles. But performing a generic task like this requires learning important information about the data: to rearrange the tiger, the model must first learn what a tiger looks like.
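As a rough sketch of how training pairs for such a pretext task might be built (the function and grid size are my assumptions, not the paper’s exact recipe), the labels come for free from the shuffle itself:

```python
import numpy as np

def make_jigsaw_example(image, grid=3, rng=np.random.default_rng()):
    """Split a square image into grid x grid tiles and shuffle them.
    Returns (shuffled_tiles, permutation): the permutation is the free
    'label' - the network must predict how to undo the shuffle."""
    h, w = image.shape[:2]
    th, tw = h // grid, w // grid
    tiles = [image[r*th:(r+1)*th, c*tw:(c+1)*tw]
             for r in range(grid) for c in range(grid)]
    perm = rng.permutation(len(tiles))
    shuffled = [tiles[i] for i in perm]
    return shuffled, perm

tiles, label = make_jigsaw_example(np.zeros((96, 96, 3)))
print(len(tiles), label)  # 9 tiles and the permutation used to scramble them
```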
Semi-supervised learning: the best of both worlds
Often you can afford to label a small fraction of your dataset but not all of it. Semi-supervised learning combines the two styles: the model fits to the few labeled samples while also exploiting the structure of the many unlabeled ones.
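scikit-learn ships semi-supervised estimators where unlabeled samples are marked with -1. A minimal sketch (my example; the 10% labeling budget is an illustrative assumption):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelSpreading

X, y = load_iris(return_X_y=True)

# Pretend we could only afford to label ~10% of the dataset;
# unlabeled samples are marked with -1 by convention.
rng = np.random.default_rng(0)
y_partial = np.where(rng.random(len(y)) < 0.1, y, -1)

model = LabelSpreading().fit(X, y_partial)
print((model.transduction_ == y).mean())  # accuracy of the inferred labels
```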
When to use supervised vs unsupervised learning
(Labeling issues. Is the first picture really a grille? Is the third picture a dog, or some cherries? How would you label these images?)
Unsupervised learning is a natural choice when labels are hard to come by. Some labeling issues you might encounter are:
- Big data: Assigning a label to every sample in your dataset can be time-consuming and expensive, especially if the labels require an expert, as in medical imaging.
- Multiple classes per sample: Your dataset may require several labels per sample if it belongs to or exhibits multiple classes. Was the third picture above a dog, some cherries, or both?
- Dense labeling: Each dimension of your multivariate data might need a label, which can get very expensive. For example, if we are training a network to draw contours around apples, we typically need every pixel to be labeled as either belonging to an apple or to the background.
If your data comes pre-packaged with labels, supervised learning is a great place to start. It allows you to compare the performance of different models and provides intuition on how difficult the prediction task is. However, keep in mind that the accuracy of your labels can be undermined by:
- Labeling errors, whether systematic bias or random variance. In other words, different annotators may not assign the same labels to the same samples; the degree to which they agree is called inter-rater agreement, and it can be alarmingly low. Even labels from the same person aren’t guaranteed to be consistent: one study found that judges gave lighter sentences after they had eaten lunch.
- Categorical representations of continuous variables, where several different levels of a variable are binned to the same discrete value, thus destroying the nuance in the variable.
- Disregarding class relationships. Independent categorical variables ignore class overlap. For example, we know that cats are conceptually closer to dogs than either are to skyscrapers. Yet simple categorical labels will not encode this fact.
Babies do a large amount of learning on their own. At the 2016 NeurIPS conference, Yann LeCun, one of the three godfathers of artificial intelligence, said:
“If intelligence is a cake, the bulk of the cake is unsupervised learning, the icing on the cake is supervised learning”
Unsupervised deep learning methods have seen significant progress in the last few years, with their performance fast approaching that of their supervised counterparts on the ImageNet challenge. Once you know the pros and cons of both styles of learning, the choice between supervised, unsupervised, or a mix of the two comes down to you and your dataset.