This blog takes on the (arguably quixotic) project of working through the huge body of machine learning techniques and framing them as part of a coherent mathematical theory. Our goal is to introduce an intuitive (yet hopefully powerful) definition of a supervised learner, one that is a slightly more precise formulation of those found in the current literature, and from there discuss how each technique fits this definition. Along the way, we'll develop some new constructions of supervised learners drawn from topology and measure theory. The project can be summarized in the following steps:

  1. Defining supervised learners
  2. Linear Regression: Linear and Euclidean learners
  3. Gradient descent: Convex Learners
  4. Topological learners: extending the hypothesis space
  5. Neural nets: Neural learners
  6. Statistics and learners: Markov learners
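As a preview of step 1, the formulation commonly found in the literature treats a supervised learner as a map from finite labeled samples to hypotheses (functions from inputs to outputs). The sketch below illustrates that standard view only; it is not the blog's own, more precise definition, and `mean_predictor` is a hypothetical toy learner chosen for illustration.

```python
from typing import Callable, List, Tuple

# Common textbook view: a supervised learner takes a finite sample of
# labeled pairs (x, y) and returns a hypothesis h: X -> Y.
Sample = List[Tuple[float, float]]        # labeled pairs (x, y)
Hypothesis = Callable[[float], float]     # a learned function h
Learner = Callable[[Sample], Hypothesis]  # learner: samples -> hypotheses

def mean_predictor(data: Sample) -> Hypothesis:
    """A deliberately trivial learner: ignore x, always predict the mean label."""
    mean_y = sum(y for _, y in data) / len(data)
    return lambda x: mean_y

# Usage: train on two labeled points, then query the hypothesis.
h = mean_predictor([(0.0, 1.0), (1.0, 3.0)])
```

Later posts replace this trivial learner with linear, convex, topological, neural, and Markov learners, each a richer instance of the same sample-to-hypothesis shape.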