An implementation of the KAN (Kolmogorov-Arnold Network) architecture, which uses learnable activation functions, applied to knowledge distillation on the MNIST handwritten digits dataset. The project distills a three-layer teacher KAN into a more compact two-layer student model and compares the distilled student against a non-distilled baseline.

Simple Knowledge Distillation using KANs on MNIST Handwritten Digits

This project implements the KAN architecture, which places learnable activation functions, parameterized as spline functions, on the edges of the network, replacing the fixed activation functions and linear weights of traditional networks. This approach offers a promising alternative to Multi-Layer Perceptrons (MLPs), with potential applications in data fitting, solving partial differential equations (PDEs), and more.

Here I have used the KAN implementation to distill a deep teacher KAN with 3 layers:

teacher_model = KAN([28*28, 64, 10])

into a lighter student model with 2 layers:

student_model = KAN([28*28, 10])

The KAN implementation is adapted from the efficient_kan repository.
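
A common way to perform this kind of logit-based distillation is Hinton-style soft targets: the student is trained to match the teacher's temperature-softened output distribution in addition to the ground-truth labels. The sketch below illustrates one such training step; it is not the exact code from distill.ipynb, and names such as train_loader, temperature, and alpha are illustrative assumptions.

import torch
import torch.nn.functional as F

# Illustrative sketch of a soft-target distillation step; the exact loss,
# optimizer, and hyperparameters in distill.ipynb may differ. Assumes
# teacher_model and student_model are the KANs defined above and that
# train_loader yields MNIST batches of shape (B, 1, 28, 28).

temperature = 4.0   # softening factor for the logits (assumed value)
alpha = 0.5         # weight between soft-target and hard-label loss (assumed)

optimizer = torch.optim.AdamW(student_model.parameters(), lr=1e-3)
teacher_model.eval()

for images, labels in train_loader:
    images = images.view(images.size(0), -1)        # flatten to (B, 784)

    with torch.no_grad():
        teacher_logits = teacher_model(images)      # frozen teacher
    student_logits = student_model(images)

    # KL divergence between the temperature-softened distributions,
    # scaled by T^2 as in standard knowledge distillation
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Ordinary cross-entropy on the ground-truth digit labels
    hard_loss = F.cross_entropy(student_logits, labels)

    loss = alpha * soft_loss + (1 - alpha) * hard_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()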

Get started with KAN-Distillation by following these steps:

  1. Clone the Repository:
    git clone https://github.com/pranavgupta2603/KAN-Distillation
    
  2. Enter the directory:
    cd KAN-Distillation
    
  3. Install Dependencies:
    pip install -r requirements.txt
    

Documentation

  • distill.ipynb: Contains the code to train the distilled student model.
  • mnist.py: An extension of the efficient_kan example code that trains a non-distilled baseline model (sketched below).
  • requirements.txt: Lists all the necessary Python packages.
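
For the non-distilled baseline, mnist.py trains a two-layer KAN directly on the hard labels. A rough sketch of such a loop is shown below; the actual script follows efficient_kan's MNIST example and may differ in optimizer, scheduler, and hyperparameters, and train_loader is again an assumed MNIST DataLoader.

import torch
import torch.nn.functional as F
from efficient_kan import KAN            # assumed import path for the KAN class

model = KAN([28 * 28, 10])               # same architecture as the student model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

for images, labels in train_loader:
    images = images.view(images.size(0), -1)    # flatten to (B, 784)
    logits = model(images)
    loss = F.cross_entropy(logits, labels)      # hard labels only, no teacher
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()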

Results

MNIST is a simple dataset, so the gain in accuracy from the distilled student over the non-distilled student is expected to be negligible.

  • Non-distilled student model, test-set accuracy: 93.65%
  • Distilled student model, test-set accuracy: 93.75%
  • Teacher model, test-set accuracy: 97.23%
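
The accuracies above could be computed with a standard evaluation pass over the MNIST test set, along the lines of the sketch below, where model is a trained KAN and test_loader an MNIST test DataLoader (both assumed).

import torch

model.eval()
correct, total = 0, 0
with torch.no_grad():
    for images, labels in test_loader:
        images = images.view(images.size(0), -1)     # flatten to (B, 784)
        preds = model(images).argmax(dim=1)          # predicted digit per image
        correct += (preds == labels).sum().item()
        total += labels.size(0)

print(f"Test accuracy: {100.0 * correct / total:.2f}%")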

License

This software is released under the MIT License.
