yu-rp / KANbeFair
A More Fair and Comprehensive Comparison between KAN and MLP
☆175 · Updated last year
Alternatives and similar repositories for KANbeFair
Users interested in KANbeFair are comparing it to the libraries listed below.
- ☆75 · Updated 8 months ago
- Benchmark of the memory and time efficiency of different KAN implementations. ☆134 · Updated last year
- Awesome list of papers that extend Mamba to various applications. ☆138 · Updated 4 months ago
- Kolmogorov–Arnold Networks with modified activations (using an MLP to represent each activation). ☆106 · Updated 3 weeks ago
- C++ and CUDA ops for fused FourierKAN. ☆81 · Updated last year
- State Space Models. ☆70 · Updated last year
- ☆42 · Updated last year
- The official repository for "HyperZ⋅Z⋅W Operator Connects Slow-Fast Networks for Full Context Interaction". ☆40 · Updated 6 months ago
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers". ☆161 · Updated 9 months ago
- ☆137 · Updated last year
- A Triton kernel for incorporating bidirectionality in Mamba2. ☆75 · Updated 10 months ago
- [ICLR 2025 Spotlight] Official implementation of ToST (Token Statistics Transformer). ☆123 · Updated 8 months ago
- Implementation of MoE-Mamba from the paper "MoE-Mamba: Efficient Selective State Space Models with Mixture of Experts" in PyTorch and Ze… ☆112 · Updated last week
- ☆96 · Updated last year
- PyTorch implementation of Jamba: "Jamba: A Hybrid Transformer-Mamba Language Model". ☆192 · Updated this week
- Kolmogorov-Arnold Networks (KAN) using Chebyshev polynomials instead of B-splines; see the first sketch after this list. ☆391 · Updated last year
- PyTorch implementation of the sparse attention from the paper "Generating Long Sequences with Sparse Transformers". ☆91 · Updated last week
- Official PyTorch implementation of "The Hidden Attention of Mamba Models". ☆228 · Updated 2 weeks ago
- When it comes to optimizers, it's always better to be safe than sorry. ☆375 · Updated last month
- Integrating Mamba/SSMs with Transformers for enhanced long-context, high-quality sequence modeling. ☆207 · Updated last week
- [NeurIPS 2025 Spotlight] TPA: Tensor ProducT ATTenTion Transformer (T6) (https://arxiv.org/abs/2501.06425). ☆401 · Updated last week
- Implementation of Griffin from the paper "Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models". ☆56 · Updated last week
- ☆68 · Updated last year
- Implementation of a multimodal diffusion transformer in PyTorch. ☆106 · Updated last year
- Reading list for research topics in state-space models. ☆329 · Updated 4 months ago
- Inference speed benchmark for "Learning to (Learn at Test Time): RNNs with Expressive Hidden States". ☆74 · Updated last year
- An easy-to-use PyTorch implementation of the Kolmogorov-Arnold Network and a few novel variations. ☆186 · Updated 11 months ago
- FastKAN: a very fast implementation of Kolmogorov-Arnold Networks (KAN); see the second sketch after this list. ☆442 · Updated last year
- Benchmarking and testing FastKAN. ☆86 · Updated last year
- Implementation of Agent Attention in PyTorch. ☆91 · Updated last year
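
For orientation, here is a minimal sketch of the Chebyshev-polynomial KAN idea referenced in the list above: each input/output edge learns coefficients over a Chebyshev basis rather than a B-spline. The class name `ChebyKANLayer`, the `tanh` input squashing, and the initialization scale are illustrative assumptions, not the linked repository's actual API.

```python
import torch
import torch.nn as nn

class ChebyKANLayer(nn.Module):
    # Hypothetical sketch of a Chebyshev-based KAN layer (not the repo's API).
    def __init__(self, in_dim: int, out_dim: int, degree: int = 4):
        super().__init__()
        self.degree = degree
        # One learnable coefficient per (input feature, output feature, order).
        self.coeffs = nn.Parameter(
            torch.randn(in_dim, out_dim, degree + 1)
            / (in_dim * (degree + 1)) ** 0.5
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squash inputs into [-1, 1], the natural domain of Chebyshev polynomials.
        x = torch.tanh(x)
        # Build T_0(x), ..., T_degree(x) via T_n = 2x * T_{n-1} - T_{n-2}.
        cheby = [torch.ones_like(x), x]
        for _ in range(2, self.degree + 1):
            cheby.append(2 * x * cheby[-1] - cheby[-2])
        basis = torch.stack(cheby[: self.degree + 1], dim=-1)  # (batch, in, degree+1)
        # Sum the learned univariate edge functions, as in the Kolmogorov-Arnold form.
        return torch.einsum("bik,iok->bo", basis, self.coeffs)

y = ChebyKANLayer(8, 16, degree=4)(torch.randn(32, 8))  # -> shape (32, 16)
```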
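
Likewise, a minimal sketch of the FastKAN trick: approximate the B-spline basis with Gaussian RBFs on a fixed grid, which reduces each layer to a cheap feature expansion followed by an ordinary linear layer. The class name, grid range, and width heuristic here are assumptions for illustration, not the repository's actual interface.

```python
import torch
import torch.nn as nn

class RBFKANLayer(nn.Module):
    # Hypothetical sketch of an RBF-based, FastKAN-style layer (not the repo's API).
    def __init__(self, in_dim: int, out_dim: int, num_centers: int = 8,
                 grid_min: float = -2.0, grid_max: float = 2.0):
        super().__init__()
        # Fixed grid of RBF centers shared by all input features.
        self.register_buffer("centers", torch.linspace(grid_min, grid_max, num_centers))
        # RBF width roughly matched to the grid spacing.
        self.gamma = (num_centers - 1) / (grid_max - grid_min)
        # Ordinary linear layer mixes the expanded (in_dim * num_centers) features.
        self.linear = nn.Linear(in_dim * num_centers, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Gaussian RBF features exp(-(gamma * (x - c))^2) for each grid center c.
        phi = torch.exp(-((x.unsqueeze(-1) - self.centers) * self.gamma) ** 2)
        return self.linear(phi.flatten(start_dim=1))

y = RBFKANLayer(8, 16)(torch.randn(32, 8))  # -> shape (32, 16)
```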