# ML-Tracker

## Knowledge base

### Stanford Cheatsheet
- CS 230 - Deep Learning

### Stanford CNN
- CS231n: Convolutional Neural Networks for Visual Recognition

### Face recognition
- How to Choose a Loss Function For Face Recognition

### Nonlinear optimization
- Gradient Descent: An overview of gradient descent optimization algorithms
- Newton's Method: Nonlinear Optimization Using Newton's Method
- Halley's Method: Nonlinear Optimization Using Halley's Method
- An Interactive Tutorial on Numerical Optimization
- Alternatives to the Gradient Descent Algorithm
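The three methods linked above differ only in how much derivative information they use per step. A minimal 1-D sketch of their update rules (the test function `f(x) = (x - 2)² + 1` and all names are illustrative, not taken from any linked article):

```python
# Compare gradient descent, Newton's method, and Halley's method when
# minimizing f(x) = (x - 2)**2 + 1, i.e. solving f'(x) = 0.
# The function and step counts are illustrative assumptions.

def f1(x):  # first derivative of f
    return 2.0 * (x - 2.0)

def f2(x):  # second derivative of f
    return 2.0

def f3(x):  # third derivative of f
    return 0.0

def gradient_descent(x, lr=0.1, steps=100):
    for _ in range(steps):
        x -= lr * f1(x)            # first-order: step against the gradient
    return x

def newton(x, steps=10):
    for _ in range(steps):
        x -= f1(x) / f2(x)         # second-order: x - f'/f''
    return x

def halley(x, steps=10):
    # Halley's method applied to the stationarity condition f'(x) = 0:
    # x_next = x - 2 f' f'' / (2 f''**2 - f' f''')
    for _ in range(steps):
        g, h, t = f1(x), f2(x), f3(x)
        x -= 2.0 * g * h / (2.0 * h * h - g * t)
    return x
```

On a quadratic, Newton's and Halley's methods land on the minimizer in one step, while gradient descent only approaches it geometrically; the higher-order methods buy faster convergence at the cost of computing (and inverting, in higher dimensions) second- and third-derivative information.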

### Sequences
- The Unreasonable Effectiveness of Recurrent Neural Networks
- Character-level recurrent sequence-to-sequence model - Keras Example
- Understanding LSTM Networks
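The gate equations walked through in "Understanding LSTM Networks" can be sketched as a single NumPy time step (weight shapes, the stacked-gate layout, and all names here are my own illustration, not from the article):

```python
import numpy as np

# One LSTM cell step: four gates computed from the current input and the
# previous hidden state, then the cell state is updated multiplicatively.
# The (4n, d + n) stacked-weight layout is an illustrative convention.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """x: input (d,); h_prev, c_prev: previous states (n,);
    W: weights (4n, d + n); b: biases (4n,). Returns (h, c)."""
    n = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0*n:1*n])     # input gate: how much new info to write
    f = sigmoid(z[1*n:2*n])     # forget gate: how much old state to keep
    o = sigmoid(z[2*n:3*n])     # output gate: how much state to expose
    g = np.tanh(z[3*n:4*n])     # candidate cell update
    c = f * c_prev + i * g      # new cell state
    h = o * np.tanh(c)          # new hidden state
    return h, c
```

Because `o` is in (0, 1) and `tanh` is in (-1, 1), every hidden-state entry stays strictly inside (-1, 1), which is part of what keeps unrolled LSTMs numerically stable.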

### Attention
- A simple overview of RNN, LSTM and Attention Mechanism
- A Deep Dive Into the Transformer Architecture – The Development of Transformer Models
- Transformers from scratch
- Sequence to Sequence (seq2seq) and Attention
- Building Seq2Seq LSTM with Luong Attention in Keras for Time Series Forecasting
- A Comprehensive Guide to Attention Mechanism in Deep Learning for Everyone
- Attention Mechanisms in Recurrent Neural Networks (RNNs) With Keras
- seq2seq Part F Encoder Decoder with Bahdanau & Luong Attention Mechanism.ipynb
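The core computation behind the Transformer articles above is scaled dot-product attention; a minimal NumPy sketch (shapes and names are illustrative assumptions):

```python
import numpy as np

# Scaled dot-product attention: each query attends to all keys, and the
# softmax weights mix the corresponding values. Shapes are illustrative.

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Q: (m, d) queries, K: (n, d) keys, V: (n, dv) values -> (m, dv)."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)         # query-key similarity, scaled
    weights = softmax(scores, axis=-1)    # each row sums to 1
    return weights @ V, weights
```

The `1/sqrt(d)` scaling keeps the dot products from growing with dimension, which would otherwise push the softmax into a near-one-hot regime with vanishing gradients.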

### Dimensionality reduction

#### Encoder-Decoder models
- The encoder-decoder model as a dimensionality reduction technique
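The idea in the linked article is that the encoder's bottleneck acts as a low-dimensional code. A minimal sketch using a tied-weight *linear* autoencoder (the training loop, dimensions, and names are my own illustration; with squared loss this recovers the same subspace as PCA):

```python
import numpy as np

# Linear autoencoder with tied weights: encode Z = X W, decode X_hat = Z W^T,
# trained by gradient descent on the squared reconstruction error.
# Learning rate, step count, and shapes are illustrative assumptions.

def train_linear_autoencoder(X, k, lr=0.01, steps=2000, seed=0):
    """X: (n_samples, d). Returns encoder weights W of shape (d, k)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.standard_normal((d, k)) * 0.1
    for _ in range(steps):
        Z = X @ W                 # encode to k dimensions
        X_hat = Z @ W.T           # decode with the tied (transposed) weights
        err = X_hat - X
        # gradient of ||X W W^T - X||_F^2 w.r.t. the shared W
        grad = 2.0 * (X.T @ err @ W + err.T @ X @ W) / X.shape[0]
        W -= lr * grad
    return W
```

Nonlinear encoder-decoders generalize this: replacing the linear maps with neural networks lets the bottleneck capture curved low-dimensional structure that PCA cannot.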

#### Principal Component Analysis (PCA)
- Essential Math for Data Science: Eigenvectors and application to PCA

#### Eigenvectors
- Determinant of a Matrix
- Eigenvector and Eigenvalue
- A geometric interpretation of the covariance matrix
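The eigenvector material above comes together in PCA: the principal components are the eigenvectors of the covariance matrix, ordered by eigenvalue. A minimal sketch (variable names are illustrative):

```python
import numpy as np

# PCA via eigendecomposition of the sample covariance matrix:
# center the data, diagonalize the covariance, keep the top-k eigenvectors.

def pca(X, k):
    """Project X (n_samples, d) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                  # center each feature
    cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance matrix (d, d)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: ascending eigenvalues
    order = np.argsort(eigvals)[::-1]        # reorder by descending variance
    components = eigvecs[:, order[:k]]       # top-k eigenvectors, (d, k)
    return Xc @ components, components
```

`np.linalg.eigh` is used rather than `eig` because the covariance matrix is symmetric, so the symmetric solver is faster and returns real, orthonormal eigenvectors.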

### Other
- 6 Types of “Feature Importance” Any Data Scientist Should Know
- Bayesian Optimization with Python
- Variance Reduction

## Papers
- Long Short-Term Memory Recurrent Neural Network Architectures for Large Scale Acoustic Modeling
- Auto-encoder based Model for High-dimensional Imbalanced Industrial Data
- Training Recurrent Neural Networks (Ilya Sutskever)
- Deep learning via Hessian-free optimization
- Accelerated gradient methods combining Tikhonov regularization with geometric damping driven by the Hessian

## Books
- Bayesian Methods for Hackers
- An Introduction to Statistical Learning

## Datasets
- Seagate Soft Sensing Data sets

## The Open-Source Data Science Masters
- The open-source curriculum for learning Data Science
