My personal notes on “Machine Learning: An Algorithmic Perspective”

Since machine learning and, more generally, artificial intelligence became my favorite subject, I have spent a lot of time learning about it on my own. I believe that a good, readable and pleasant book can be key to learning such difficult topics. I started with what is probably the most famous book in the field, Pattern Recognition and Machine Learning (Bishop), but I failed to finish it. Plenty of maths, abstract concepts and no real examples make it very tough, in my opinion. The book collects an admirable amount of work and may serve as a great reference, but coming from an undergraduate background it can be hard to fully comprehend and learn from.

Because of that lack of examples, I looked for a book with examples in some programming language, as Machine Learning: An Algorithmic Perspective has. However, I didn't like it very much either, because it relies on Python classes whose internals you cannot see. A quite understandable argument is that you don't really need to know how something works internally to grasp the general idea of an algorithm, but I think the only way to truly understand an algorithm is to program it yourself.

I tried to read each chapter of this book carefully, spending a lot of time researching, reading papers and watching YouTube videos to help me understand the concepts, and of course writing down some notes. I have decided to publish everything I am producing, from personal written notes to source code, because it may help other people who are reading this book or who are simply trying to understand specific concepts.

As I don't want to write a very long post about it, this entry will serve as an index: for each chapter I find interesting, I will write a post uploading my own material and giving some explanations.

Machine Learning: An Algorithmic Perspective
1.-Introduction
2.-Linear discriminants
3.-The Multi-Layer Perceptron
4.-Radial Basis Functions and Splines
5.-Support Vector Machines
6.-Learning with Trees
7.-Decision by Committee: Ensemble Learning

2.-Linear discriminants

Downloads: perceptron.pdf, activation.m, NN2outputs.m, NNand.m
Contents: Transfer function, why a bias is needed, learning rate, examples, Matlab code.
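The Matlab files above (e.g. NNand.m) cover this chapter; as a language-neutral cross-check, here is a minimal Python/NumPy sketch of the same ideas: a step transfer function, the bias handled as an extra input clamped to -1 so the threshold is learned like any other weight, and the perceptron learning rule with a learning rate. The function names and defaults are mine, not the book's.

```python
import numpy as np

def train_perceptron(X, y, eta=0.1, epochs=20):
    """Single perceptron with a step activation, trained on 0/1 targets."""
    # Append a bias input fixed at -1 to every sample.
    X = np.hstack([X, -np.ones((X.shape[0], 1))])
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, target in zip(X, y):
            out = 1 if xi @ w > 0 else 0       # step transfer function
            w += eta * (target - out) * xi     # perceptron learning rule
    return w

def predict(w, X):
    X = np.hstack([X, -np.ones((X.shape[0], 1))])
    return (X @ w > 0).astype(int)

# Logical AND, as in NNand.m
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)
```

Because AND is linearly separable, the rule converges in a handful of epochs; try XOR instead to see why a single perceptron cannot learn it.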

3.-The Multi-Layer Perceptron

Downloads: MLP.pdf, NN231.m
Contents: Backpropagation, Multi-Layer Perceptron
Further study: Create a general algorithm to use a NN with N inputs, M layers, P nodes, Q outputs.
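As a complement to MLP.pdf, here is a minimal one-hidden-layer backpropagation sketch in Python/NumPy, trained on XOR with batch gradient descent and sigmoid units. The hidden-layer size, learning rate, epoch count and initialisation are arbitrary choices of mine, not the book's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(Xb, W1, W2):
    """Xb must already carry the -1 bias column on the inputs."""
    h = sigmoid(Xb @ W1)                            # hidden activations
    h = np.hstack([h, -np.ones((h.shape[0], 1))])   # bias for hidden layer
    return h, sigmoid(h @ W2)

def train(X, y, hidden=3, eta=0.5, epochs=5000):
    Xb = np.hstack([X, -np.ones((X.shape[0], 1))])
    W1 = rng.normal(0, 0.5, (Xb.shape[1], hidden))
    W2 = rng.normal(0, 0.5, (hidden + 1, y.shape[1]))
    losses = []
    for _ in range(epochs):
        h, out = forward(Xb, W1, W2)
        losses.append(float(((out - y) ** 2).mean()))
        delta_o = (out - y) * out * (1 - out)       # output-layer error
        delta_h = (delta_o @ W2[:-1].T) * h[:, :-1] * (1 - h[:, :-1])
        W2 -= eta * h.T @ delta_o                   # backpropagated updates
        W1 -= eta * Xb.T @ delta_h
    return W1, W2, losses

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # XOR
W1, W2, losses = train(X, y)
```

Generalising this to N inputs, M layers and Q outputs (the "further study" item above) mostly means turning W1/W2 into a list of weight matrices and looping the delta computation backwards through it.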

4.-Radial Basis Functions and Splines

Downloads: kmeans.pdf, kmeans.m
Contents: K-Means algorithm
Further study: Revisit this section in the future to understand it better. Study more about splines.

5.-Support Vector Machines

Downloads: SVM.pdf
Contents: SVM algorithm explained, Non-linear SVM, several examples
Further study: Implement this efficiently in Matlab, learn more about non-linear SVMs, learn how to identify the support vectors automatically.

6.-Learning with Trees

Downloads: Trees.pdf
Contents: Example of Decision Tree (classification)
Further study: Learn/implement C5.0 algorithm, and CART (for regression).
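The split criterion behind ID3/C4.5-style classification trees (and the C5.0 follow-up mentioned above) is information gain: the drop in entropy achieved by splitting on a feature. A small Python sketch of that computation; the toy play/stay data below is mine, not from the book.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature):
    """Entropy reduction from splitting on one discrete feature."""
    total = entropy(labels)
    n = len(labels)
    for value in set(row[feature] for row in rows):
        subset = [lab for row, lab in zip(rows, labels)
                  if row[feature] == value]
        total -= len(subset) / n * entropy(subset)
    return total

# Toy data: 'outlook' perfectly predicts the class, so the gain
# equals the full entropy of the labels (1 bit here).
rows = [{'outlook': 'sunny'}, {'outlook': 'sunny'},
        {'outlook': 'rain'}, {'outlook': 'rain'}]
labels = ['play', 'play', 'stay', 'stay']
gain = information_gain(rows, labels, 'outlook')
```

A tree builder just picks the feature with the largest gain at each node and recurses on the subsets; CART for regression swaps entropy for variance.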

7.-Decision by Committee: Ensemble Learning

Downloads: Adaboost.pdf
Contents: Adaboost formulae and two examples
Further study: Implement Adaboost in Matlab, examples of bagging and mixture of experts.
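To accompany the formulae in Adaboost.pdf, here is a compact Python/NumPy sketch of AdaBoost with decision stumps as weak learners: reweight the examples after each round so misclassified points count more, and combine the stumps with weights alpha = ½ ln((1 − err)/err). The brute-force stump search and the round count are my own simplifications, not the book's code.

```python
import numpy as np

def fit_stump(X, y, w):
    """Best weighted decision stump: (feature, threshold, polarity)."""
    best_err, best = np.inf, None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for pol in (1, -1):
                pred = pol * np.where(X[:, f] <= thr, 1, -1)
                err = w[pred != y].sum()       # weighted training error
                if err < best_err:
                    best_err, best = err, (f, thr, pol)
    return best_err, best

def stump_predict(stump, X):
    f, thr, pol = stump
    return pol * np.where(X[:, f] <= thr, 1, -1)

def adaboost(X, y, rounds=10):
    """y must be labelled -1/+1."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(rounds):
        err, stump = fit_stump(X, y, w)
        err = max(err, 1e-10)                  # avoid log(0) on perfect stumps
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, stump))
        # Upweight misclassified points, downweight correct ones; renormalise.
        w *= np.exp(-alpha * y * stump_predict(stump, X))
        w /= w.sum()
        if err < 1e-9:
            break
    return ensemble

def predict(ensemble, X):
    score = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.sign(score)

# Toy 1-D example: separable by a single threshold at x = 2.
X = np.array([[1.], [2.], [3.], [4.]])
y = np.array([1, 1, -1, -1])
ensemble = adaboost(X, y)
```

Bagging, by contrast, would train each stump on a bootstrap resample with equal vote weights instead of reweighting the data between rounds.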
