This is the second project of the course "Pattern Classification and Machine Learning". The dataset consists of 6'000 images containing different kinds of objects: a horse, a car, an airplane, or anything else. The goal of the project is to build two classifiers: the first is a binary classifier, where the first class contains images showing either a horse, a car or an airplane, and the second class all other objects. The second is a multi-class classifier, where each image has to be assigned to one of the classes horse, car, airplane or other. Predictions were made for 15'000 images with an error of 8%. The best models we obtained used neural networks and SVMs.
In machine learning, the features are the most crucial ingredient for building a good model. We used two sets of features: histograms of oriented gradients (HOG) and OverFeat ImageNet CNN features. HOG is widely used in computer vision for object detection. It decomposes the image into several cells and, for each of them, computes a histogram of the gradient orientations, where each pixel's gradient angle is weighted by its magnitude. The OverFeat ImageNet CNN features are also very interesting: they are extracted with OverFeat, a convolutional-network-based image feature extractor trained on the ImageNet dataset (tens of millions of images).
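The per-cell computation described above can be sketched as follows. This is a minimal, illustrative version of the HOG idea (cell histograms of magnitude-weighted orientations); it is not the exact descriptor we used, and the usual block normalization step is omitted for brevity:

```python
import numpy as np

def hog_cell_histogram(image, n_bins=9, cell=8):
    """Sketch of a HOG-style descriptor: for each cell, a histogram of
    gradient orientations, each pixel weighted by its gradient magnitude.
    (Illustrative only; block normalization is omitted.)"""
    # Image gradients along rows (gy) and columns (gx).
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees.
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0
    h, w = image.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            # Orientation histogram for this cell, magnitude-weighted.
            hist, _ = np.histogram(a, bins=n_bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)
```

On a 16x16 image with 8x8 cells and 9 bins, this yields a 4 x 9 = 36-dimensional descriptor; the real feature vectors are much longer.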
Curse of dimensionality
In this project, 6'000 images were given and we obtained 42'273 features overall. We are clearly faced with the problem of having far more features than samples. Another issue is that training might be very slow, which matters when several models have to be created, trained and compared. One possible solution is to project the data onto a subspace of the original space. In our case, we used principal component analysis (PCA) to find the best linear approximation of dimension M (the dimension of the subspace) of the data.
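A minimal sketch of such a projection, using the SVD of the centered data matrix (the rows of Vt are the principal directions, ordered by explained variance). The function name and return convention are our own for illustration:

```python
import numpy as np

def pca_project(X, M):
    """Project samples X (n_samples x n_features) onto the top-M
    principal components. Returns the projected data, the M
    principal directions, and the mean used for centering."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data; rows of Vt are principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:M].T, Vt[:M], mean
```

Keeping all components reconstructs the data exactly; truncating to M components gives the best rank-M linear approximation in the least-squares sense.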
Simple models such as logistic regression were not powerful enough to obtain a good balanced error rate (BER). We tested decision trees, random forests, artificial neural networks, AdaBoost, support vector machines and others. In the end, neural networks and support vector machines achieved the best BER among all our models.
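For reference, the BER used above is the average of the per-class error rates, so a small class weighs as much as a large one. A minimal implementation:

```python
import numpy as np

def balanced_error_rate(y_true, y_pred):
    """Balanced error rate: mean over classes of the per-class error."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Error rate computed separately within each true class.
    errs = [np.mean(y_pred[y_true == c] != c) for c in np.unique(y_true)]
    return float(np.mean(errs))
```

For example, a classifier that is perfect on class 0 but misclassifies half of class 1 has a BER of 0.25, regardless of how many samples each class contains.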