tanaymeh / mamba-train
A single repo with all scripts and utilities to train or fine-tune the Mamba model, with or without fill-in-the-middle (FIM)
☆51 · Updated 10 months ago
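For context on the "with or without FIM" part of the description: fill-in-the-middle training rearranges each document so the model learns to infill a missing span given its surrounding context. The sketch below is a minimal illustration of PSM-style FIM data preparation; the sentinel token names, split strategy, and `apply_fim` helper are assumptions for illustration and are not taken from the mamba-train repository.

```python
# Minimal sketch of PSM-style fill-in-the-middle (FIM) data preparation.
# Sentinel token strings and the splitting strategy are illustrative assumptions,
# not the mamba-train repository's actual implementation.
import random

FIM_PREFIX = "<fim_prefix>"
FIM_MIDDLE = "<fim_middle>"
FIM_SUFFIX = "<fim_suffix>"

def apply_fim(text: str, fim_rate: float = 0.5) -> str:
    """With probability `fim_rate`, reorder a document into
    prefix/suffix/middle form so the model learns infilling."""
    if random.random() >= fim_rate:
        return text  # plain autoregressive sample, no FIM transformation
    # Pick two cut points and split the document into three spans.
    i, j = sorted(random.sample(range(len(text)), 2))
    prefix, middle, suffix = text[:i], text[i:j], text[j:]
    # PSM order: the middle span is moved to the end, after the suffix.
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}{middle}"

print(apply_fim("def add(a, b):\n    return a + b\n", fim_rate=1.0))
```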
Alternatives and similar repositories for mamba-train:
Users interested in mamba-train are comparing it to the libraries listed below.
- A repository for research on medium-sized language models. ☆76 · Updated 8 months ago
- My fork of Allen AI's OLMo for educational purposes. ☆30 · Updated 2 months ago
- ☆71 · Updated 6 months ago
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆35 · Updated 9 months ago
- RWKV-7: Surpassing GPT ☆77 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆116 · Updated 2 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆96 · Updated 4 months ago
- Collection of autoregressive model implementations ☆81 · Updated this week
- The simplest, fastest repository for training/finetuning medium-sized xLSTMs. ☆39 · Updated 8 months ago
- GoldFinch and other hybrid transformer components ☆43 · Updated 7 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆149 · Updated 2 months ago
- QuIP quantization ☆49 · Updated 11 months ago
- Implementation of Infini-Transformer in Pytorch ☆109 · Updated last month
- ☆181 · Updated this week
- A byte-level decoder architecture that matches the performance of tokenized Transformers. ☆65 · Updated 9 months ago
- PyTorch implementation of models from the Zamba2 series. ☆176 · Updated 3 weeks ago
- Prune transformer layers ☆67 · Updated 8 months ago
- Griffin MQA + Hawk Linear RNN Hybrid ☆85 · Updated 9 months ago
- NanoGPT (124M) quality in 2.67B tokens ☆27 · Updated this week
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆196 · Updated 3 weeks ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆57 · Updated 3 weeks ago
- Normalized Transformer (nGPT) ☆152 · Updated 3 months ago
- Train, tune, and infer Bamba model ☆84 · Updated last month
- ☆44 · Updated 3 months ago
- Set of scripts to finetune LLMs ☆36 · Updated 10 months ago
- Evaluating the Mamba architecture on the Othello game ☆44 · Updated 9 months ago
- ☆125 · Updated last year
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆118 · Updated 5 months ago