AdityaNG / kan-gpt
A PyTorch implementation of Generative Pre-trained Transformers (GPTs) using Kolmogorov-Arnold Networks (KANs) for language modeling.
☆726 · Updated last year
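The common thread in kan-gpt and many of the repositories below is the KAN layer: instead of a fixed activation applied after a dense weight matrix, every input-output edge carries its own small learnable univariate function, typically a spline or polynomial expansion. As a rough illustration only (this is not code from kan-gpt; the Chebyshev basis, the `tanh` squashing, and all names here are assumptions for the sketch), the idea behind a Chebyshev-basis KAN layer, as in the Chebyshev variant listed below, can be written as:

```python
import math

def chebyshev_basis(x, degree):
    """Evaluate T_0(x)..T_degree(x) via the recurrence T_k = 2x*T_{k-1} - T_{k-2}."""
    T = [1.0, x]
    for _ in range(2, degree + 1):
        T.append(2.0 * x * T[-1] - T[-2])
    return T[: degree + 1]

class KANEdge:
    """One learnable edge function phi(x) = sum_k c_k * T_k(tanh(x))."""
    def __init__(self, coeffs):
        self.coeffs = coeffs  # the trainable parameters of this edge

    def __call__(self, x):
        x = math.tanh(x)  # squash input into [-1, 1], the Chebyshev domain
        basis = chebyshev_basis(x, len(self.coeffs) - 1)
        return sum(c * t for c, t in zip(self.coeffs, basis))

def kan_layer(xs, edge_coeffs):
    """output_j = sum_i phi_{j,i}(x_i): each (input, output) pair has its own phi."""
    return [
        sum(KANEdge(edge_coeffs[j][i])(x) for i, x in enumerate(xs))
        for j in range(len(edge_coeffs))
    ]
```

Training then amounts to fitting the per-edge coefficients by gradient descent, which is what the PyTorch implementations below do at scale (with batched tensor ops rather than Python loops).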
Alternatives and similar repositories for kan-gpt
Users interested in kan-gpt are comparing it to the libraries listed below.
- Kolmogorov-Arnold Networks (KAN) using Chebyshev polynomials instead of B-splines. ☆400 · Updated last year
- ☆748 · Updated last year
- FastKAN: Very Fast Implementation of Kolmogorov-Arnold Networks (KAN). ☆461 · Updated last year
- Variations of Kolmogorov-Arnold Networks. ☆116 · Updated last year
- This project extends the idea of the innovative architecture of Kolmogorov-Arnold Networks (KAN) to the Convolutional Layers, changing th… ☆911 · Updated 8 months ago
- Training small GPT-2 style models using Kolmogorov-Arnold networks. ☆122 · Updated last year
- An easy-to-use PyTorch implementation of the Kolmogorov-Arnold Network and a few novel variations. ☆186 · Updated last year
- ☆446 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling. ☆936 · Updated last month
- An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN). ☆4,551 · Updated last year
- Build high-performance AI models with modular building blocks. ☆576 · Updated 2 months ago
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆940 · Updated last year
- A comprehensive collection of KAN (Kolmogorov-Arnold Network)-related resources, including libraries, projects, tutorials, papers, and mor… ☆3,148 · Updated 3 weeks ago
- Official repository for the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients". ☆569 · Updated last year
- Understanding Kolmogorov-Arnold Networks: A Tutorial Series on KAN using Toy Examples. ☆203 · Updated 7 months ago
- Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture". ☆560 · Updated last year
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆294 · Updated last year
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆687 · Updated last year
- A novel implementation of fusing ViT with Mamba into a fast, agile, and high-performance multi-modal model. Powered by Zeta, the simplest… ☆462 · Updated 2 months ago
- Annotated version of the Mamba paper. ☆493 · Updated last year
- An implementation of "Retentive Network: A Successor to Transformer for Large Language Models". ☆1,211 · Updated 2 years ago
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,434 · Updated this week
- 👁️ + 💬 + 🎧 = 🤖 Curated list of top foundation and multimodal models! [Paper + Code + Examples + Tutorials] ☆635 · Updated last year
- PyTorch implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model". ☆203 · Updated this week
- Schedule-Free Optimization in PyTorch. ☆2,244 · Updated 7 months ago
- Official repository of the xLSTM. ☆2,081 · Updated 2 months ago
- LoRA and DoRA from-scratch implementations. ☆215 · Updated last year
- Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate". ☆432 · Updated last year
- llama3.np is a pure NumPy implementation of the Llama 3 model. ☆993 · Updated 8 months ago
- The official implementation of "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training". ☆981 · Updated last year