Motivational intro; what is AI; explicit/implicit approaches; machine learning and its types (supervised/unsupervised/reinforcement); local and global generalization; search methods.
Review of k-nearest neighbours (as an instance of a distance-based, lazy method); the naïve Bayes classifier (as an instance of a method that considers each feature independently); the decision tree (considers combinations of features); ensembles.
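The lazy, distance-based character of k-nearest neighbours can be shown in a few lines of plain Python (a minimal sketch, not a reference implementation; the 2-D points and labels are made up for illustration):

```python
from collections import Counter
from math import dist  # Euclidean distance (Python >= 3.8)

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.

    `train` is a list of (features, label) pairs. k-NN is "lazy":
    nothing is learned up front; all work happens at prediction time.
    """
    neighbours = sorted(train, key=lambda xy: dist(xy[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy 2-D data: two clusters with made-up labels.
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((0.2, 0.1), "a"),
         ((1.0, 1.0), "b"), ((0.9, 1.1), "b"), ((1.1, 0.9), "b")]
print(knn_predict(train, (0.15, 0.1)))  # near the first cluster -> "a"
```

Because the method only compares distances, the choice of distance function and feature scaling matter as much as the choice of k.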
Artificial neuron and its function, linear separability, activation functions; artificial neural networks, types of architectures; neural networks for regression and classification; how to compute the gradients (backpropagation).
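A single artificial neuron and one hand-derived gradient step can be sketched as follows (a minimal illustration assuming a sigmoid activation and squared-error loss; the weights, inputs, and learning rate are made up):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(w, b, x):
    # Weighted sum of inputs followed by a nonlinear activation.
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def grad_step(w, b, x, target, lr=0.5):
    """One gradient-descent step on squared error for a single example.

    Chain rule: dL/dw_i = 2*(y - t) * y*(1 - y) * x_i,
    using sigmoid'(z) = y*(1 - y).
    """
    y = neuron(w, b, x)
    delta = 2 * (y - target) * y * (1 - y)
    w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
    b = b - lr * delta
    return w, b

w, b = [0.5, -0.3], 0.1
for _ in range(200):
    w, b = grad_step(w, b, x=[1.0, 2.0], target=1.0)
print(neuron(w, b, [1.0, 2.0]))  # approaches the target 1.0
```

Backpropagation generalizes exactly this chain-rule computation to many layers of such neurons.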
Motivational examples: the deep learning boom; why depth helps; challenges to deep learning; modern deep learning; deep learning architectures; regularization in deep learning and popular tricks.
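One of the popular regularization tricks, dropout, is simple enough to sketch directly (an illustrative "inverted dropout" variant; the activation values and dropout rate are made up):

```python
import random

def dropout(activations, p=0.5, training=True, rng=None):
    """Inverted dropout: during training, zero each activation with
    probability p and scale the survivors by 1/(1 - p) so the expected
    value is unchanged; at test time it is the identity."""
    if not training:
        return list(activations)
    rng = rng or random.Random()
    return [0.0 if rng.random() < p else a / (1 - p) for a in activations]

acts = [0.2, 1.5, -0.7, 0.9]
print(dropout(acts, p=0.5, rng=random.Random(0)))  # some entries zeroed, rest doubled
print(dropout(acts, training=False))               # unchanged at test time
```

Randomly silencing units during training discourages co-adaptation between neurons, which is why dropout acts as a regularizer.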
Motivational example: face recognition and clustering; distance measures, preprocessing, learning; embeddings in general (classifiers, word embeddings, dimensionality reduction, reinforcement learning, ...).
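The idea of comparing embeddings with a distance measure can be illustrated with cosine similarity (the 4-dimensional vectors below are made-up stand-ins for learned face embeddings, not outputs of a real model):

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors:
    # 1.0 for identical directions, 0.0 for orthogonal ones.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Made-up "embeddings": two photos of the same person, one of another.
anna_1 = [0.9, 0.1, 0.3, 0.0]
anna_2 = [0.8, 0.2, 0.4, 0.1]
bob    = [0.1, 0.9, 0.0, 0.7]

print(cosine_similarity(anna_1, anna_2))  # same person  -> high similarity
print(cosine_similarity(anna_1, bob))     # different person -> lower
```

Face recognition then reduces to thresholding such a similarity, and clustering to grouping vectors that lie close together in the embedding space.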
Machine learning and hyperparameters; hyperparameter optimization; Gaussian processes, MLE and MAP vs. the full Bayesian approach; Bayesian optimization; optimization of hyperparameters: examples.
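Before Bayesian optimization, the basic hyperparameter-search loop can be illustrated with plain random search (the `validation_score` function below is a made-up stand-in for training a model and scoring it on held-out data; the search-space bounds are invented for the example):

```python
import random

def validation_score(learning_rate, num_trees):
    """Made-up surrogate for a real model evaluation: a smooth score
    peaked around learning_rate = 0.1 and num_trees = 100."""
    return -((learning_rate - 0.1) ** 2) * 100 - ((num_trees - 100) / 100) ** 2

def random_search(n_trials=200, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Sample each hyperparameter from its search space.
        params = {"learning_rate": 10 ** rng.uniform(-3, 0),  # log-uniform
                  "num_trees": rng.randint(10, 500)}
        score = validation_score(**params)
        if score > best_score:
            best, best_score = params, score
    return best, best_score

best, best_score = random_search()
print(best)  # should land near learning_rate ~ 0.1, num_trees ~ 100
```

Bayesian optimization replaces the blind sampling with a Gaussian-process model of the score surface, so each trial is chosen where the model expects the most improvement.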