Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture"
☆563 · Updated Dec 28, 2024
Alternatives and similar repositories for m2
Users interested in m2 are comparing it to the libraries listed below.
- Understand and test language model architectures on synthetic tasks. ☆263 · Updated this week
- FlashFFTConv: Efficient Convolutions for Long Sequences with Tensor Cores ☆344 · Updated Dec 28, 2024
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆250 · Updated Jun 6, 2025
- Convolutions for Sequence Modeling ☆911 · Updated Jun 13, 2024
- ☆63 · Updated Oct 3, 2024
- Language Modeling with the H3 State Space Model ☆522 · Updated Sep 29, 2023
- HGRN2: Gated Linear RNNs with State Expansion ☆56 · Updated Aug 20, 2024
- ☆58 · Updated Jul 9, 2024
- Official Repository of Pretraining Without Attention (BiGS); BiGS is the first model to achieve BERT-level transfer learning on the GLUE … ☆118 · Updated Mar 16, 2024
- Train Models Contrastively in PyTorch ☆779 · Updated Mar 26, 2025
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,685 · Updated Apr 17, 2024
- The repository for the code of the UltraFastBERT paper ☆519 · Updated Mar 24, 2024
- An annotated implementation of the Hyena Hierarchy paper ☆34 · Updated May 28, 2023
- Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models ☆341 · Updated Feb 23, 2025
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,322 · Updated Mar 6, 2025
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,722 · Updated Jun 25, 2024
- Linear Attention Sequence Parallelism (LASP) ☆88 · Updated Jun 4, 2024
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward exp… ☆226 · Updated Sep 18, 2025
- Efficient PScan implementation in PyTorch ☆17 · Updated Jan 2, 2024
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,201 · Updated Jul 11, 2024
- Official implementation of "Hydra: Bidirectional State Space Models Through Generalized Matrix Mixers" ☆170 · Updated Jan 30, 2025
- Foundation Architecture for (M)LLMs ☆3,137 · Updated Apr 11, 2024
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated Feb 8, 2024
- The official repository for our paper "The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization". ☆34 · Updated Jun 11, 2025
- RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable)… ☆14,419 · Updated Mar 5, 2026
- ☆10 · Updated Oct 2, 2024
- Accessible large language models via k-bit quantization for PyTorch. ☆8,052 · Updated Mar 17, 2026
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,965 · Updated Mar 16, 2026
- [NeurIPS 2023 spotlight] Official implementation of HGRN in our NeurIPS 2023 paper - Hierarchically Gated Recurrent Neural Network for Se… ☆68 · Updated Apr 24, 2024
- Sequence modeling with Mega. ☆303 · Updated Jan 28, 2023
- Official PyTorch Implementation of the Longhorn Deep State Space Model ☆57 · Updated Dec 4, 2024
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extreme Length (ICLR 2024) ☆209 · Updated May 20, 2024
- Large Context Attention ☆769 · Updated Oct 13, 2025
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,630 · Updated this week
- Accelerated First Order Parallel Associative Scan ☆196 · Updated Jan 7, 2026
- ColBERT: state-of-the-art neural search (SIGIR'20, TACL'21, NeurIPS'21, NAACL'22, CIKM'22, ACL'23, EMNLP'23) ☆3,799 · Updated Oct 14, 2025
- ☆16 · Updated Dec 9, 2023
- ☆35 · Updated Apr 12, 2024
- Minimalistic large language model 3D-parallelism training ☆2,617 · Updated Feb 19, 2026