Awesome Machine Learning Courses

Contributing

Please feel free to send pull requests.

Introduction

There is a wealth of excellent learning material spread across the internet. This list is my attempt to highlight some of the awesome machine learning courses that are available online for free.

Table of Contents

  • Basics
  • Deep Learning
  • Natural Language Processing
  • Data Science
  • Related to ML
  • Something for fun
  • Miscellaneous

Icons

πŸ“Ή - Lecture Videos
πŸ’» - Assignments
πŸ“— - Notes

Basics

  • CSC 2515 Introduction to Machine Learning University of Toronto πŸ’» πŸ“—

    • This course gives an overview of many concepts, techniques, and algorithms in machine learning.
    • Lecture Notes
    • Tutorials
  • Intro to Statistical Learning Stanford University πŸ“Ή πŸ’» πŸ“—

    • This is an introductory-level course in supervised learning, with a focus on regression and classification methods. The syllabus includes: linear and polynomial regression, logistic regression and linear discriminant analysis; cross-validation and the bootstrap, model selection and regularization methods (ridge and lasso); nonlinear models, splines and generalized additive models; tree-based methods, random forests and boosting; support-vector machines. Some unsupervised learning methods are discussed: principal components and clustering (k-means and hierarchical).
    • The lectures cover all the material in An Introduction to Statistical Learning, with Applications in R, which is more application-oriented than The Elements of Statistical Learning (ESL) book. (A minimal ridge/lasso sketch appears at the end of this section.)
  • 8.594J Introduction to Neural Networks MIT πŸ’» πŸ“—

    • This course explores the organization of synaptic connectivity as the basis of neural computation and learning. Perceptrons and dynamical theories of recurrent networks including amplifiers, attractors, and hybrid computation are covered. Additional topics include backpropagation and Hebbian learning, as well as models of perception, motor control, memory, and neural development.
    • Lecture Notes
    • Assignments
  • CS 7641 Supervised Learning Georgia Institute of Technology πŸ“Ή πŸ“—

    • This course covers important Supervised Learning topics including Machine Learning is the ROX, Decision Trees, Regression and Classification, Neural Networks, Instance-Based Learning, Ensemble B&B, Kernel Methods and Support Vector Machines (SVMs), Computational Learning Theory, VC Dimensions, Bayesian Learning, and Bayesian Inference.
    • Video Lectures
  • 6.867 Machine Learning MIT πŸ’» πŸ“—

    • 6.867 is an introductory course on machine learning which gives an overview of many concepts, techniques, and algorithms in machine learning, beginning with topics such as classification and linear regression and ending up with more recent topics such as boosting, support vector machines, hidden Markov models, and Bayesian networks.
    • Lecture Notes
    • Assignments
  • CS 7641 Unsupervised Learning Georgia Institute of Technology πŸ“Ή πŸ“—

    • This course covers important Unsupervised Learning approaches such as Randomized Optimization, Clustering, Feature Selection, Feature Transformation, and Information Theory.
    • Video Lectures
  • 15.097 Prediction MIT πŸ’» πŸ“—

    • Prediction is at the heart of almost every scientific discipline, and the study of generalization (that is, prediction) from data is the central topic of machine learning and statistics, and more generally, data mining. Machine learning and statistical methods are used throughout the scientific world for their use in handling the "information overload" that characterizes our current digital age. Machine learning developed from the artificial intelligence community, mainly within the last 30 years, at the same time that statistics has made major advances due to the availability of modern computing. However, parts of these two fields aim at the same goal, that is, of prediction from data. This course provides a selection of the most important topics from both of these subjects.
    • Lecture Notes
    • Datasets
    • Project Ideas
  • 10-601 Machine Learning Carnegie Mellon University πŸ“Ή πŸ’» πŸ“—

    • This course covers the theory and practical algorithms for machine learning from a variety of perspectives. It covers topics such as Bayesian networks, decision tree learning, Support Vector Machines, statistical learning methods, unsupervised learning and reinforcement learning. The course covers theoretical concepts such as inductive bias, the PAC learning framework, Bayesian learning methods, margin-based learning, and Occam's Razor. Short programming assignments include hands-on experiments with various learning algorithms. This course is designed to give a graduate-level student a thorough grounding in the methodologies, technologies, mathematics and algorithms currently needed by people who do research in machine learning.
    • Lectures
    • Datasets
    • Project Ideas
  • CS229 Machine Learning Stanford University πŸ’» πŸ“—

    • This course provides a broad introduction to machine learning and statistical pattern recognition. Topics include: supervised learning (generative/discriminative learning, parametric/non-parametric learning, neural networks, support vector machines); unsupervised learning (clustering, dimensionality reduction, kernel methods); learning theory (bias/variance tradeoffs; VC theory; large margins); reinforcement learning and adaptive control. The course will also discuss recent applications of machine learning, such as to robotic control, data mining, autonomous navigation, bioinformatics, speech recognition, and text and web data processing.
    • Notes
    • Project Ideas
  • CS 7641 Reinforcement Learning Georgia Institute of Technology πŸ“Ή πŸ“—

    • This course includes important Reinforcement Learning approaches like Markov Decision Processes and Game Theory.
    • Video Lectures
    • Texts
  • UCL Course on RL (Reinforcement Learning) University College London πŸ“Ή πŸ’» πŸ“—

    • This course provides a brief introduction to reinforcement learning.
    • Lectures
    • Notes
  • 18.465 Statistical Learning Theory MIT πŸ’» πŸ“—

    • The main goal of this course is to study the generalization ability of a number of popular machine learning algorithms such as boosting, support vector machines and neural networks. Topics include Vapnik-Chervonenkis theory, concentration inequalities in product spaces, and other elements of empirical process theory.
    • Lecture Notes
    • Assignments
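
To give a concrete flavor of one recurring topic above (ridge and lasso regularization with cross-validation, as covered in the Intro to Statistical Learning lectures), here is a minimal Python sketch. It is not course material: it assumes scikit-learn is installed and uses made-up synthetic data purely for illustration.

```python
# Minimal illustrative sketch (not from any course above): ridge vs. lasso
# regularization evaluated with 5-fold cross-validation on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))           # 200 samples, 20 features
true_coef = np.zeros(20)
true_coef[:3] = [2.0, -1.5, 0.5]         # only 3 features actually matter
y = X @ true_coef + rng.normal(scale=0.5, size=200)

for name, model in [("ridge", Ridge(alpha=1.0)), ("lasso", Lasso(alpha=0.1))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean 5-fold R^2 = {scores.mean():.3f}")

# The lasso tends to drive irrelevant coefficients exactly to zero,
# while ridge only shrinks them toward zero.
print("nonzero lasso coefficients:", np.sum(Lasso(alpha=0.1).fit(X, y).coef_ != 0))
```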

Deep Learning

  • CS 231n Convolutional Neural Networks for Visual Recognition Stanford University πŸ’» πŸ“— πŸ“Ή

    • Computer Vision has become ubiquitous in our society, with applications in search, image understanding, apps, mapping, medicine, drones, and self-driving cars. This course is a deep dive into details of the deep learning architectures with a focus on learning end-to-end models for these tasks, particularly image classification. During the 10-week course, students will learn to implement, train and debug their own neural networks and gain a detailed understanding of cutting-edge research in computer vision.
    • Lecture Notes
    • Lecture Videos
    • Github Page
  • CS 224d Deep Learning for Natural Language Processing Stanford University πŸ“Ή πŸ’»

    • Natural language processing (NLP) is one of the most important technologies of the information age. Understanding complex language utterances is also a crucial part of artificial intelligence. Applications of NLP are everywhere because people communicate most everything in language: web search, advertisement, emails, customer service, language translation, radiology reports, etc. There are a large variety of underlying tasks and machine learning models powering NLP applications. Recently, deep learning approaches have obtained very high performance across many different NLP tasks. These models can often be trained with a single end-to-end model and do not require traditional, task-specific feature engineering. In this spring quarter course students will learn to implement, train, debug, visualize and invent their own neural network models. The course provides a deep excursion into cutting-edge research in deep learning applied to NLP.
    • Syllabus
    • Lectures and Assignments
  • DS-GA 1008 Deep Learning New York University πŸ“Ή πŸ“— πŸ’»

    • This increasingly popular course is taught through the Data Science Center at NYU. Originally introduced by Yann LeCun, it is now led by Zaid Harchaoui, although Prof. LeCun is rumored to still stop by from time to time. It covers the theory, techniques, and tricks that are used to achieve very high accuracy for machine learning tasks in computer vision and natural language processing. The assignments are in Lua and hosted on Kaggle.
    • Course Page
    • Recorded Lectures
  • Machine Learning: 2014-2015 University of Oxford πŸ“Ή πŸ“— πŸ’»

    • The course focuses on neural networks and uses the Torch deep learning library (implemented in Lua) for exercises and assignments. Topics include: logistic regression, back-propagation, convolutional neural networks, max-margin learning, siamese networks, recurrent neural networks, LSTMs, handwriting with recurrent neural networks, variational autoencoders, image generation, and reinforcement learning. (A tiny plain-numpy backpropagation sketch appears at the end of this section.)
    • Lectures and Assignments
    • Source code
  • EECS E6894 Deep Learning for Computer Vision and Natural Language Processing Columbia University πŸ“— πŸ’»

    • This graduate level research class focuses on deep learning techniques for vision and natural language processing problems. It gives an overview of the various deep learning models and techniques, and surveys recent advances in the related fields. This course uses Theano as the main programming tool. GPU programming experiences are preferred although not required. Frequent paper presentations and a heavy programming workload are expected.
    • Readings
    • Assignments
    • Lecture Notes
  • 11-785 Deep Learning Carnegie Mellon University πŸ’»

    • The course presents the subject through a series of seminars and labs that explore it from its early beginnings and work up to the state of the art. The seminars cover the basics of deep learning and the underlying theory, the breadth of application areas to which it has been applied, and the latest issues in learning from very large amounts of data. The course concentrates largely, although not entirely, on the connectionist architectures most commonly associated with deep learning. Lectures and reading notes are available on the page.
  • CADL Deep Learning Kadenze Academy πŸ’» πŸ“Ή πŸ“—

    • This course introduces you to deep learning: the state-of-the-art approach to building artificial intelligence algorithms. We cover the basic components of deep learning, what it means, how it works, and develop code necessary to build various algorithms such as deep convolutional networks, variational autoencoders, generative adversarial networks, and recurrent neural networks. A major focus of this course will be to not only understand how to build the necessary components of these algorithms, but also how to apply them for exploring creative applications. We'll see how to train a computer to recognize objects in an image and use this knowledge to drive new and interesting behaviors, from understanding the similarities and differences in large datasets and using them to self-organize, to understanding how to infinitely generate entirely new content or match the aesthetics or contents of another image. Deep learning offers enormous potential for creative applications and in this course we interrogate what's possible. Through practical applications and guided homework assignments, you'll be expected to create datasets, develop and train neural networks, explore your own media collections using existing state-of-the-art deep nets, synthesize new content from generative algorithms, and understand deep learning's potential for creating entirely new aesthetics and new ways of interacting with large amounts of data. Lectures and Reading Notes are available on the page.
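
As referenced in the Oxford course entry above, here is a tiny backpropagation sketch in plain numpy. It is only a minimal illustration of the back-propagation topic that several of these courses cover; it is not taken from any course's assignments, and the toy data, network size, and hyperparameters are made up.

```python
# Minimal illustrative sketch (not course material): backpropagation for a
# tiny two-layer network on a toy XOR-like task, in plain numpy.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-like labels

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.5

for step in range(2000):
    # forward pass
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # predicted probabilities
    loss = np.mean((p - y) ** 2)

    # backward pass (chain rule, layer by layer)
    dp = 2 * (p - y) / len(X)
    dz2 = dp * p * (1 - p)                    # through the sigmoid
    dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
    dh = dz2 @ W2.T
    dz1 = dh * (1 - h ** 2)                   # through the tanh
    dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)

    # gradient descent update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final training loss:", round(float(loss), 4))
print("training accuracy:", np.mean((p > 0.5) == y))
```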

Natural Language Processing

  • CS 224N Natural Language Processing Stanford University πŸ“Ή πŸ’» πŸ“—

    • This course introduces the fundamental concepts and ideas in natural language processing (NLP), otherwise known as computational linguistics. The course focuses on modern quantitative techniques in NLP -- using large corpora, statistical models for acquisition, disambiguation, and parsing -- and the construction of representative systems. (A toy bag-of-words classifier sketch appears at the end of this section.)
    • Lectures and Assignments
  • CS 224D Deep Learning for Natural Language Processing Stanford University πŸ“Ή πŸ’» πŸ“—

    • Natural language processing (NLP) is one of the most important technologies of the information age. Understanding complex language utterances is also a crucial part of artificial intelligence. Applications of NLP are everywhere because people communicate most everything in language: web search, advertisement, emails, customer service, language translation, radiology reports, etc. There are a large variety of underlying tasks and machine learning models powering NLP applications. Recently, deep learning approaches have obtained very high performance across many different NLP tasks. These models can often be trained with a single end-to-end model and do not require traditional, task-specific feature engineering. In this spring quarter course students will learn to implement, train, debug, visualize and invent their own neural network models. The course provides a deep excursion into cutting-edge research in deep learning applied to NLP.
    • Lectures and Assignments
  • 6.864 Advanced Natural Language Processing MIT πŸ“— πŸ’»

    • 6.864 is a graduate introduction to natural language processing, the study of human language from a computational perspective. It covers syntactic, semantic and discourse processing models. The emphasis will be on machine learning or corpus-based methods and algorithms.
    • Assignments
    • Lecture Notes
  • EECS E6894 Deep Learning for Computer Vision and Natural Language Processing Columbia University πŸ“— πŸ’»

    • This graduate level research class focuses on deep learning techniques for vision and natural language processing problems. It gives an overview of the various deep learning models and techniques, and surveys recent advances in the related fields. This course uses Theano as the main programming tool. GPU programming experiences are preferred although not required. Frequent paper presentations and a heavy programming workload are expected.
    • Readings
    • Assignments
    • Lecture Notes
  • INFR11062 Machine Translation University of Edinburgh πŸ“— πŸ’»

    • The course covers fundamental building blocks from linguistics, machine learning, algorithms, data structures, and formal language theory, showing how they apply to a real and difficult problem in artificial intelligence.
    • Assignments
    • Lecture Notes
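
The toy sketch referenced above: a minimal, illustrative bag-of-words text classifier, the kind of corpus-based statistical method these NLP courses build on (and go far beyond). It assumes scikit-learn is installed; the tiny "corpus" and labels are made up.

```python
# Minimal illustrative sketch (not course material): bag-of-words features
# plus logistic regression for toy sentiment classification.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "a wonderful, moving film", "great acting and a great script",
    "boring and far too long", "a dull, lifeless movie",
]
train_labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# CountVectorizer turns each text into a sparse word-count vector;
# logistic regression then learns one weight per word.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

print(clf.predict(["a great and moving script", "boring, lifeless acting"]))
# likely prints [1 0] on this toy data
```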

Data Science

  • EECS E6893 & EECS E6895 Big Data Analytics & Advanced Big Data Analytics Columbia University πŸ’» πŸ“—

    • Students will gain knowledge of how to analyze Big Data. It serves as an introductory course for graduate students who expect to face Big Data storage, processing, analysis, visualization, and application issues in both workplace and research environments.
    • Taught by Dr. Ching-Yung Lin
    • Course Site
    • Assignments - Assignments are present in the Course Slides
  • Info 290 Analyzing Big Data with Twitter UC Berkeley School of Information πŸ“Ή πŸ“—

    • In this course, UC Berkeley professors and Twitter engineers provide lectures on the most cutting-edge algorithms and software tools for data analytics as applied to Twitter's data. Topics include applied natural language processing algorithms such as sentiment analysis, large scale anomaly detection, real-time search, information diffusion and outbreak detection, trend detection in social streams, recommendation algorithms, and advanced frameworks for distributed computing.
    • Lecture Videos
    • Previous Years coursepage
  • CS 156 Learning from Data Caltech πŸ“Ή πŸ’»

    • This is an introductory course in machine learning (ML) that covers the basic theory, algorithms, and applications. ML is a key technology in Big Data, and in many financial, medical, commercial, and scientific applications. It enables computational systems to adaptively improve their performance with experience accumulated from the observed data. ML has become one of the hottest fields of study today, taken up by undergraduate and graduate students from 15 different majors at Caltech. This course balances theory and practice, and covers the mathematical as well as the heuristic aspects.
    • Lectures
    • Homework
    • Textbook
  • CS 4786 Machine Learning for Data Science Cornell University πŸ’» πŸ“Ή πŸ“—

    • An introductory course in machine learning, with a focus on data modeling and related methods and learning algorithms for data science. Tentative topic list (a short PCA + k-means sketch appears at the end of this section):
      • Dimensionality reduction, such as principal component analysis (PCA) and the singular value decomposition (SVD), canonical correlation analysis (CCA), independent component analysis (ICA), compressed sensing, random projection, the information bottleneck. (We expect to cover some, but probably not all, of these topics).
      • Clustering, such as k-means, Gaussian mixture models, the expectation-maximization (EM) algorithm, and link-based clustering. (We do not expect to cover hierarchical or spectral clustering.)
      • Probabilistic-modeling topics such as graphical models, latent-variable models, inference (e.g., belief propagation), parameter learning.
      • Regression will be covered if time permits.
    • Assignments
    • Lectures
  • CS 109 Data Science Harvard University πŸ’» πŸ“—

    • Learning from data in order to gain useful predictions and insights. This course introduces methods for five key facets of an investigation: data wrangling, cleaning, and sampling to get a suitable data set; data management to be able to access big data quickly and reliably; exploratory data analysis to generate hypotheses and intuition; prediction based on statistical methods such as regression and classification; and communication of results through visualization, stories, and interpretable summaries.
    • Lectures
    • Slides
    • Labs and Assignments
    • 2014 Lectures
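
The sketch referenced in the CS 4786 entry above: a minimal illustration of dimensionality reduction (PCA) followed by clustering (k-means), two of the topics on that course's list. It assumes scikit-learn is installed and uses synthetic blob data; it is not course material.

```python
# Minimal illustrative sketch (not course material): PCA then k-means
# on synthetic data.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# 300 points in 10 dimensions, drawn from 3 clusters
X, _ = make_blobs(n_samples=300, n_features=10, centers=3, random_state=0)

# Project onto 2 principal components, then cluster in the reduced space
pca = PCA(n_components=2).fit(X)
X2 = pca.transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)

print("variance explained by 2 components:", pca.explained_variance_ratio_.sum().round(3))
print("cluster sizes:", np.bincount(labels))
```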

Related to ML

  • UBC Computer Science 322 Introduction to Artificial Intelligence University of British Columbia πŸ’» πŸ“—

    • This course provides an introduction to the field of artificial intelligence. The major topics covered will include reasoning and representation, search, constraint satisfaction problems, planning, logic, reasoning under uncertainty, and planning under uncertainty.
    • Textbook
    • Lectures
  • SLAM Course - WS13/14 Robot Mapping Albert-Ludwigs-UniversitΓ€t Freiburg πŸ“Ή πŸ“—

    • The course will cover different topics and techniques in the context of environment modeling with mobile robots. It includes techniques such as SLAM with the family of Kalman filters, information filters, particle filters, graph-based approaches, least-squares error minimization, techniques for place recognition and appearance-based mapping, and data association.
    • Lecture Notes
    • Video Lectures
  • CSCI 512 Computer Vision Colorado School of Mines πŸ“Ή πŸ“—

    • This course provides an overview of the field, starting with image formation and low-level image processing. It covers detailed theory and techniques for extracting features from images, measuring shape and location, and recognizing objects. Design ability and hands-on projects are emphasized, using image processing software and hardware systems.
    • Textbook
    • Lecture Notes
    • Video Lectures
    • Project Ideas
  • CAP5415 Computer Vision University of Central Florida πŸ“Ή πŸ“—

    • This is an introductory-level computer vision course. It covers the basic topics of computer vision and introduces some fundamental approaches for computer vision research: Image Filtering, Edge Detection, Interest Point Detectors, Motion and Optical Flow, Object Detection and Tracking, Region/Boundary Segmentation, Shape Analysis and Statistical Shape Models, Deep Learning for Computer Vision, Imaging Geometry, Camera Modeling and Calibration.
    • Textbook
    • Lecture Notes
    • Video Lectures
  • CVX 101 Convex Optimization Stanford University πŸ’» πŸ“—

    • The course concentrates on recognizing and solving convex optimization problems that arise in applications. Topics addressed include: convex sets, functions, and optimization problems; basics of convex analysis; least-squares, linear and quadratic programs, semidefinite programming, minimax, extremal volume, and other problems; optimality conditions, duality theory, theorems of alternatives, and applications; interior-point methods; and applications to signal processing, statistics and machine learning, control and mechanical engineering, digital and analog circuit design, and finance.
    • Textbook
    • Lectures and Assignments
  • CS 7637 Knowledge-Based Artificial Intelligence Georgia Institute of Technology πŸ“Ή

    • The course will cover three kinds of topics: (1) core topics such as knowledge representation, planning, constraint satisfaction, case-based reasoning, knowledge revision, incremental concept learning, and explanation-based learning, (2) common tasks such as classification, diagnosis, and design, and (3) advanced topics such as analogical reasoning, visual reasoning, and meta-reasoning.
    • Video Lectures
  • CS395T Statistical and Discrete Methods for Scientific Computing University of Texas πŸ“Ή πŸ“— πŸ’»

    • Practical course in applying modern statistical techniques to real data, particularly bioinformatic data and large data sets. The emphasis is on efficient computation and concise coding, mostly in MATLAB and C++. Topics covered include probability theory and Bayesian inference; univariate distributions; Central Limit Theorem; generation of random deviates; tail (p-value) tests; multiple hypothesis correction; empirical distributions; model fitting; error estimation; contingency tables; multivariate normal distributions; phylogenetic clustering; Gaussian mixture models; EM methods; maximum likelihood estimation; Markov Chain Monte Carlo; principal component analysis; dynamic programming; hidden Markov models; performance measures for classifiers; support vector machines; Wiener filtering; wavelets; multidimensional interpolation; information theory.
    • Lectures and Assignments
  • 10-708 Probabilistic Graphical Models Carnegie Mellon University πŸ“Ή πŸ“— πŸ’»

    • Many of the problems in artificial intelligence, statistics, computer systems, computer vision, natural language processing, and computational biology, among many other fields, can be viewed as the search for a coherent global conclusion from local information. The probabilistic graphical models framework provides a unified view for this wide range of problems, enabling efficient inference, decision-making, and learning in problems with a very large number of attributes and huge datasets. This graduate-level course will provide you with a strong foundation both for applying graphical models to complex problems and for addressing core research topics in graphical models.
    • Lecture Videos
    • Assignments
    • Lecture Notes
    • Readings

  • CS 8803 Artificial Intelligence for Robotics Georgia Institute of Technology πŸ“Ή

    • This course will teach students basic methods in Artificial Intelligence, including: probabilistic inference, planning and search, localization, tracking and control, all with a focus on robotics.
    • Video Lectures
  • EE103 Introduction to Matrix Methods Stanford University πŸ’» πŸ“—

    • The course covers the basics of matrices and vectors, solving linear equations, least-squares methods, and many applications. It covers the mathematics, but the focus is on using matrix methods in applications such as tomography, image processing, data fitting, time series prediction, finance, and many others. EE103 is based on a book that Stephen Boyd and Lieven Vandenberghe are currently writing. Students use a new language called Julia to do computations with matrices and vectors. (A small numpy least-squares sketch appears at the end of this section.)
    • Lectures
    • Book
    • Assignments
    • Code
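
The sketch referenced in the EE103 entry above: a minimal plain-numpy illustration of fitting a line by least squares. The course itself uses Julia, so this is only an analogous sketch under that assumption, not course code; the noisy data is synthetic.

```python
# Minimal illustrative sketch (not course material): least-squares line fit
# with numpy.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
y = 3.0 * t + 2.0 + rng.normal(scale=1.0, size=t.size)  # noisy line

# Build the design matrix [t, 1] and solve min ||A x - y||_2
A = np.column_stack([t, np.ones_like(t)])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

slope, intercept = coeffs
print(f"estimated slope = {slope:.2f}, intercept = {intercept:.2f}")
# should be close to the true values 3 and 2
```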

Something for fun

Miscellaneous
