## all about machine learning

Machine learning is “the study and development of statistical algorithms that can learn from data and generalize to unseen data.” (Yes, I stole that from Wikipedia.)

Today, the collective attention[^1] of the field is focused on deep learning, the technique that’s taken the subject by storm ever since results showed that deep networks scale remarkably well with data and compute. But ML spans far beyond that: it includes classical methods like clustering and tree-based models, as well as mathematical techniques like integer programming and planning methods (e.g., computational methods based on Bellman’s equation).
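To make the planning side concrete, here is a minimal sketch of value iteration, the classic computational method built on Bellman’s equation. The two-state, two-action MDP below is entirely made up for illustration; only the Bellman backup itself is the point.

```python
# Minimal value-iteration sketch on a hypothetical 2-state, 2-action MDP.
# The transition probabilities P and rewards R are invented for illustration.
import numpy as np

gamma = 0.9  # discount factor

# P[s, a, s'] = probability of landing in s' after taking action a in state s
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.0, 1.0]]])
# R[s, a] = immediate reward for taking action a in state s
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

V = np.zeros(2)
for _ in range(200):
    # Bellman optimality backup:
    #   V(s) <- max_a [ R(s, a) + gamma * sum_s' P(s' | s, a) * V(s') ]
    Q = R + gamma * (P @ V)   # Q[s, a], shape (2, 2)
    V = Q.max(axis=1)

print(V)  # converged state values
```

At convergence, `V` is a fixed point of the backup, which is exactly what Bellman’s equation demands.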

This overview introduces a few of the major components underpinning modern ML: linear algebra and probability theory. In the sub-entries, we’ll dig a bit deeper.

## linear algebra

Linear algebra is one of those rare fields of mathematics where (almost) everything is fully understood. It is the theory behind vectors, matrices, and multidimensional (rigid) geometry.

## probability theory

Probability theory and statistics are just as fundamental. Probability theory is the mathematical description of randomness: it studies the different “versions” of randomness (i.e., different probability distributions) that arise from different random processes, and how these differences lead to differing sampled outcomes. Inferential statistics discusses how to recover probability distributions, or certain aspects of them, from a finite number of samples.
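A quick sketch of that inferential step: draw a finite sample from a known distribution, then recover its parameters from the sample alone. The Gaussian and its parameters below are chosen arbitrarily for the demonstration.

```python
# Inferential statistics in miniature: estimate a distribution's parameters
# from a finite sample. The "true" parameters are arbitrary for this demo.
import numpy as np

rng = np.random.default_rng(0)
true_mu, true_sigma = 2.0, 0.5
samples = rng.normal(true_mu, true_sigma, size=10_000)

# Point estimates computed from the sample only
mu_hat = samples.mean()
sigma_hat = samples.std(ddof=1)  # ddof=1 gives the unbiased sample variance

print(mu_hat, sigma_hat)  # close to 2.0 and 0.5
```

With 10,000 samples the estimates land within a few hundredths of the true values; shrinking the sample size widens that gap, which is exactly the finite-sample problem inferential statistics studies.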

[^1]: Ba dum tishhhhh. Yes, this is a joke.