AdityaNG / kan-gpt
The PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling
☆726 · Updated last year
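The core idea behind kan-gpt and several repositories below is to replace a transformer's fixed-activation MLPs with KAN layers, where each input-output edge carries its own learnable univariate function. As a minimal illustrative sketch (not the kan-gpt implementation, which uses B-spline bases; here each edge function is a weighted sum of fixed Gaussian radial basis functions for brevity):

```python
import torch
import torch.nn as nn

class NaiveKANLayer(nn.Module):
    """Toy KAN-style layer: each (input, output) edge applies a learnable
    univariate function, parameterized as coefficients over fixed Gaussian
    RBF bases. Real KAN implementations typically use B-splines instead."""

    def __init__(self, in_dim, out_dim, num_basis=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        # fixed basis-function centers on a 1-D grid
        self.register_buffer("centers", torch.linspace(*grid_range, num_basis))
        # one coefficient per (output, input, basis) edge function
        self.coef = nn.Parameter(torch.randn(out_dim, in_dim, num_basis) * 0.1)

    def forward(self, x):
        # x: (batch, in_dim)
        # evaluate every basis function at every input: (batch, in_dim, num_basis)
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) ** 2))
        # output_o = sum_i phi_{o,i}(x_i), with phi parameterized by self.coef
        return torch.einsum("bik,oik->bo", basis, self.coef)

layer = NaiveKANLayer(16, 32)
out = layer(torch.randn(4, 16))
print(out.shape)  # torch.Size([4, 32])
```

Unlike a standard `nn.Linear`, the learnable parameters here shape the activation functions themselves rather than scalar edge weights, which is the distinguishing feature of the KAN architecture.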
Alternatives and similar repositories for kan-gpt
Users interested in kan-gpt are comparing it to the libraries listed below.
- ☆749 · Updated last year
- Variations of Kolmogorov-Arnold Networks ☆115 · Updated last year
- This project extends the idea of the innovative architecture of Kolmogorov-Arnold Networks (KAN) to the Convolutional Layers, changing th… ☆908 · Updated 8 months ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆121 · Updated last year
- A comprehensive collection of KAN (Kolmogorov-Arnold Network)-related resources, including libraries, projects, tutorials, papers, and mor… ☆3,136 · Updated this week
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆930 · Updated last month
- An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN). ☆4,544 · Updated last year
- Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆565 · Updated last year
- Build high-performance AI models with modular building blocks ☆571 · Updated last month
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆938 · Updated last year
- ☆446 · Updated last year
- Official repository of Evolutionary Optimization of Model Merging Recipes ☆1,390 · Updated last year
- Schedule-Free Optimization in PyTorch ☆2,240 · Updated 6 months ago
- PyTorch implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model" ☆200 · Updated last month
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture" ☆560 · Updated 11 months ago
- UNet diffusion model in pure CUDA ☆656 · Updated last year
- First-principles implementations of groundbreaking AI algorithms using a wide range of deep learning frameworks, accompanied by supporting… ☆180 · Updated 4 months ago
- Code for "Adam-mini: Use Fewer Learning Rates To Gain More" (https://arxiv.org/abs/2406.16793) ☆445 · Updated 7 months ago
- Naively combining transformers and Kolmogorov-Arnold Networks to learn and experiment ☆37 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆987 · Updated last year
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,431 · Updated this week
- Official implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate" ☆431 · Updated last year
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆686 · Updated last year
- Best practices & guides on how to write distributed PyTorch training code ☆552 · Updated last month
- Simple, minimal implementation of the Mamba SSM in one PyTorch file, using logcumsumexp (Heisen sequence). ☆128 · Updated last year
- From-scratch implementation of a sparse mixture-of-experts language model inspired by Andrej Karpathy's makemore :) ☆776 · Updated last year
- A novel implementation fusing ViT with Mamba into a fast, agile, and high-performance multi-modal model. Powered by Zeta, the simplest… ☆462 · Updated last month
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" ☆294 · Updated last year
- Annotated version of the Mamba paper ☆492 · Updated last year
- Open-weights language model from Google DeepMind, based on Griffin. ☆656 · Updated 6 months ago
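One recurring architectural theme in the list above is sparse mixture-of-experts routing (e.g. the makemore-inspired sparse-MoE repository), where a learned router sends each token to only its top-k experts. As a generic hedged sketch of that routing pattern (not the code of any specific repository above; expert shapes and `k` are illustrative choices):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy sparse MoE layer: a linear router scores experts per token,
    the top-k experts run on that token, and their outputs are combined
    with softmax-normalized gate weights."""

    def __init__(self, dim, num_experts=4, k=2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x):
        # x: (tokens, dim)
        logits = self.router(x)                        # (tokens, num_experts)
        weights, idx = logits.topk(self.k, dim=-1)     # each (tokens, k)
        weights = F.softmax(weights, dim=-1)           # normalize over chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            token_sel, slot = (idx == e).nonzero(as_tuple=True)
            if token_sel.numel():                      # run expert only on its tokens
                out[token_sel] += weights[token_sel, slot, None] * expert(x[token_sel])
        return out

moe = TopKMoE(8)
y = moe(torch.randn(10, 8))
print(y.shape)  # torch.Size([10, 8])
```

Because only k of the experts run per token, parameter count grows with `num_experts` while per-token compute stays roughly constant, which is the usual motivation for the design.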