This repo contains Jupyter notebooks with toy examples that build an intuitive understanding of Kolmogorov-Arnold Networks (KANs). The notebooks develop the concepts and code gradually, starting from the basics of B-splines used as activation functions and progressing through more complex scenarios, including symbolic regression.
Original paper: Liu et al. 2024, KAN: Kolmogorov-Arnold Networks
Using toy examples throughout, the notebooks are structured to cover both the theoretical underpinnings and the practical applications of KANs.
B-Spline Basics
- Understanding the mathematical construction of B-splines.
- Exploring how B-splines are used for function approximation.
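As a minimal sketch of the construction above (illustrative, not the notebooks' actual code): B-spline basis functions can be built with the Cox-de Boor recursion on a uniform knot vector, and inside the domain they sum to one (partition of unity), which is what makes them well-behaved building blocks for learned activations.

```python
import numpy as np

def bspline_basis(x, grid, k):
    """Evaluate all degree-k B-spline basis functions at points x
    via the Cox-de Boor recursion. `grid` is a 1-D knot vector."""
    x = np.asarray(x)[:, None]          # shape (n_points, 1)
    t = np.asarray(grid)[None, :]       # shape (1, n_knots)
    # Degree 0: indicator of each half-open knot interval
    B = ((x >= t[:, :-1]) & (x < t[:, 1:])).astype(float)
    for d in range(1, k + 1):
        left = (x - t[:, :-(d + 1)]) / (t[:, d:-1] - t[:, :-(d + 1)])
        right = (t[:, d + 1:] - x) / (t[:, d + 1:] - t[:, 1:-d])
        B = left * B[:, :-1] + right * B[:, 1:]
    return B  # shape (n_points, n_knots - k - 1)

# Uniform knots on [0, 1], extended by k knots on each side
k, n_intervals = 3, 5
h = 1.0 / n_intervals
grid = np.arange(-k, n_intervals + k + 1) * h
x = np.linspace(0.0, 0.999, 50)
B = bspline_basis(x, grid, k)
print(B.shape)                          # (50, n_intervals + k) basis functions
print(np.allclose(B.sum(axis=1), 1.0))  # True: partition of unity on [0, 1)
```

A function approximation is then just a learned linear combination `B @ coefficients` of these fixed basis functions.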
Building Simple KANs
- Constructing and understanding [1, 1, 1, ..., 1] KAN configurations.
- Implementing and exploring backpropagation through stacked splines.
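The two bullets above can be sketched in a few lines of PyTorch (a toy illustration with made-up hyperparameters, not the notebooks' implementation): a `[1, 1, 1]` KAN is just two learnable scalar splines composed, and because the whole pipeline is differentiable, backpropagation trains the spline coefficients directly.

```python
import torch

def bspline_basis(x, grid, k):
    # Cox-de Boor recursion; x: (n,), grid: (m,) uniform knot vector
    x = x[:, None]
    t = grid[None, :]
    B = ((x >= t[:, :-1]) & (x < t[:, 1:])).float()
    for d in range(1, k + 1):
        left = (x - t[:, :-(d + 1)]) / (t[:, d:-1] - t[:, :-(d + 1)])
        right = (t[:, d + 1:] - x) / (t[:, d + 1:] - t[:, 1:-d])
        B = left * B[:, :-1] + right * B[:, 1:]
    return B

class ScalarSpline(torch.nn.Module):
    """One learnable 1-D spline activation: x -> sum_i c_i * B_i(x)."""
    def __init__(self, k=3, n_intervals=5, lo=-1.0, hi=1.0):
        super().__init__()
        h = (hi - lo) / n_intervals
        self.k = k
        self.register_buffer("grid", lo + torch.arange(-k, n_intervals + k + 1) * h)
        self.coef = torch.nn.Parameter(0.1 * torch.randn(n_intervals + k))

    def forward(self, x):
        # Clamp inputs into the spline's domain before evaluating the basis
        lo = self.grid[self.k].item()
        hi = self.grid[-self.k - 1].item() - 1e-6
        return bspline_basis(x.clamp(lo, hi), self.grid, self.k) @ self.coef

# A [1, 1, 1] "KAN": two scalar splines stacked, trained end to end
torch.manual_seed(0)
model = torch.nn.Sequential(ScalarSpline(), ScalarSpline())
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.linspace(-1, 1, 256)
y = torch.sin(3 * x)                     # toy 1-D target
losses = []
for step in range(2000):
    opt.zero_grad()
    loss = ((model(x) - y) ** 2).mean()
    loss.backward()                      # backprop through both splines
    opt.step()
    losses.append(loss.item())
print(losses[0], losses[-1])             # loss should drop substantially
```

Gradients flow through the (fixed) basis evaluation into the coefficients of both splines, which is all "backpropagation through stacked splines" amounts to.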
Grid Extension and Continual Learning
- Expanding a model's capacity through grid extension.
- How KANs mitigate catastrophic forgetting in continual learning.
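Both ideas above can be demonstrated in a small numpy sketch (illustrative constants, assuming uniform knots): grid extension refits a trained spline's function on a finer knot grid, growing capacity without discarding what was learned; and because each coefficient only affects the local support of its basis function, updating one region of the input space leaves distant regions untouched.

```python
import numpy as np

def bspline_basis(x, grid, k):
    # Cox-de Boor recursion; x: (n,), grid: (m,) uniform knot vector
    x = np.asarray(x)[:, None]
    t = np.asarray(grid)[None, :]
    B = ((x >= t[:, :-1]) & (x < t[:, 1:])).astype(float)
    for d in range(1, k + 1):
        left = (x - t[:, :-(d + 1)]) / (t[:, d:-1] - t[:, :-(d + 1)])
        right = (t[:, d + 1:] - x) / (t[:, d + 1:] - t[:, 1:-d])
        B = left * B[:, :-1] + right * B[:, 1:]
    return B

k = 3
f = lambda x: np.exp(np.sin(np.pi * x))        # toy target function
xs = np.linspace(0.0, 0.999, 200)

def make_grid(n_intervals):
    return np.arange(-k, n_intervals + k + 1) / n_intervals

# Fit spline coefficients on a coarse grid by least squares
B_coarse = bspline_basis(xs, make_grid(5), k)
c_coarse, *_ = np.linalg.lstsq(B_coarse, f(xs), rcond=None)
coarse_vals = B_coarse @ c_coarse

# Grid extension: refit a finer spline to the coarse spline's outputs,
# so capacity grows while the learned function is preserved
B_fine = bspline_basis(xs, make_grid(20), k)
c_fine, *_ = np.linalg.lstsq(B_fine, coarse_vals, rcond=None)
print(np.abs(B_fine @ c_fine - coarse_vals).max())   # tiny: function preserved

# Locality (why KANs resist forgetting): perturbing one coefficient
# only changes the output inside that basis function's local support
c_local = c_fine.copy()
c_local[-1] += 10.0                              # perturb a coefficient near x = 1
diff = np.abs(B_fine @ c_local - B_fine @ c_fine)
print(diff[xs < 0.5].max())                      # 0.0: left half untouched
```

Contrast this with an MLP, where every weight influences the output everywhere, so training on new data tends to overwrite old behavior globally.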
Symbolic Regression using KANs
- Training KANs with fixed symbolic activation functions.
- Understanding the implications of symbolic regression within neural networks.
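A toy illustration of the underlying idea (not the actual KAN library API; the candidate library and the sampled activation below are made up): once an activation has been learned, it can be compared against a small library of symbolic candidates, and the best-matching one can be fixed in place so the network reads as a symbolic formula.

```python
import numpy as np

# Hypothetical learned activation sampled on a grid (here: secretly 2x^2 + 0.5)
x = np.linspace(-1, 1, 100)
y = 2.0 * x**2 + 0.5 + 0.01 * np.random.default_rng(0).normal(size=100)

# Candidate symbolic library; fit y ~ a*g(x) + b for each g by least
# squares and keep the candidate with the smallest residual
library = {"x": lambda t: t, "x^2": lambda t: t**2,
           "sin(x)": np.sin, "exp(x)": np.exp}

def residual(g):
    design = np.stack([g(x), np.ones_like(x)], axis=1)
    return np.linalg.lstsq(design, y, rcond=None)[1][0]

best = min(library.items(), key=lambda item: residual(item[1]))
print(best[0])  # -> "x^2"
```

After this matching step, the spline would be replaced by the chosen symbolic function (with its fitted affine parameters) and training would continue with that activation fixed, which is what "fixed symbolic activation functions" refers to.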
To follow these tutorials, you should have a basic understanding of machine learning concepts and be familiar with Python programming. Experience with PyTorch and Jupyter Notebooks is also recommended.
Contributions to this tutorial series are welcome! If you have suggestions for improvement or want to add new examples, please feel free to submit a pull request or open an issue.