flawedmatrix / mamba-ssm
Implementation of Mamba in Rust
☆87 · Updated last year
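For context on what this repo implements: Mamba's core is a selective state-space recurrence, h_t = Ā h_{t-1} + B̄ x_t, y_t = C h_t, where Δ varies per step. Below is a minimal single-channel sketch of that scan with a diagonal state matrix and zero-order-hold discretization. This is an illustration under stated assumptions, not code from mamba-ssm; the names (`selective_scan`, `dt`, etc.) are hypothetical, and real implementations vectorize over channels and use a parallel, hardware-aware scan rather than this sequential loop.

```rust
/// Sequential selective-SSM scan for one channel (illustrative sketch).
/// `a` holds the diagonal of the state matrix A (typically negative),
/// `b`/`c` are the input/output projections, and `dt` holds the
/// per-step discretization steps Δ_t that make the scan "selective".
fn selective_scan(x: &[f32], a: &[f32], b: &[f32], c: &[f32], dt: &[f32]) -> Vec<f32> {
    let n = a.len();
    let mut h = vec![0.0f32; n]; // hidden state h_t
    let mut y = Vec::with_capacity(x.len());
    for (t, &xt) in x.iter().enumerate() {
        let d = dt[t];
        let mut yt = 0.0f32;
        for i in 0..n {
            // Zero-order-hold discretization: Ā = exp(Δ·a), B̄ ≈ Δ·b
            let a_bar = (d * a[i]).exp();
            h[i] = a_bar * h[i] + d * b[i] * xt; // h_t = Ā h_{t-1} + B̄ x_t
            yt += c[i] * h[i];                   // y_t = C h_t
        }
        y.push(yt);
    }
    y
}

fn main() {
    // Toy example: length-4 input, state size 2.
    let x = [1.0, 0.5, -0.25, 0.0];
    let a = [-1.0, -0.5];
    let b = [1.0, 1.0];
    let c = [0.5, 0.5];
    let dt = [0.1f32; 4];
    println!("{:?}", selective_scan(&x, &a, &b, &c, &dt));
}
```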
Alternatives and similar repositories for mamba-ssm
Users interested in mamba-ssm are comparing it to the libraries listed below.
- RWKV-7: Surpassing GPT ☆91 · Updated 7 months ago
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆41 · Updated last year
- A single repo with all scripts and utils to train / fine-tune the Mamba model with or without FIM ☆55 · Updated last year
- https://x.com/BlinkDL_AI/status/1884768989743882276 ☆28 · Updated last month
- Implementation of MambaByte, from "MambaByte: Token-free Selective State Space Model", in PyTorch and Zeta ☆118 · Updated 2 months ago
- PyTorch implementation of models from the Zamba2 series. ☆182 · Updated 5 months ago
- 1.58-bit LLaMa model ☆81 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated 8 months ago
- Explorations into the proposal from the paper "Grokfast: Accelerated Grokking by Amplifying Slow Gradients" ☆101 · Updated 6 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆89 · Updated last year
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆63 · Updated last year
- ☆98 · Updated 5 months ago
- RWKV, in easy-to-read code ☆72 · Updated 3 months ago
- Modified Mamba code to run on CPU ☆30 · Updated last year
- ☆190 · Updated this week
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆239 · Updated 4 months ago
- Testing LLM reasoning abilities with family relationship quizzes. ☆62 · Updated 4 months ago
- An open-source reproduction of NVIDIA's nGPT (Normalized Transformer with Representation Learning on the Hypersphere) ☆101 · Updated 3 months ago
- Lightweight toolkit to train and fine-tune 1.58-bit language models ☆80 · Updated last month
- RWKV in nanoGPT style ☆191 · Updated last year
- Code repository for BlackMamba ☆247 · Updated last year
- ☆133 · Updated 10 months ago
- Inference of Mamba models in pure C ☆187 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆31 · Updated last year
- Collection of autoregressive model implementations ☆85 · Updated 2 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 8 months ago
- Fast, modular code to create and train cutting-edge LLMs ☆67 · Updated last year
- GoldFinch and other hybrid transformer components ☆45 · Updated 11 months ago
- look how they massacred my boy ☆63 · Updated 8 months ago
- An attempt to make the multiple residual streams from ByteDance's Hyper-Connections paper accessible to the public ☆85 · Updated last week