HazyResearch / m2
Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture"
☆560 · Updated 9 months ago
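The repo's core primitive is the Monarch matrix: a product of block-diagonal matrices interleaved with a fixed permutation, so a dense n × n matvec becomes two batched GEMMs over √n blocks of size √n × √n (about 2·n^1.5 multiply-adds instead of n^2), which is what makes the architecture sub-quadratic yet GEMM-based. Below is a minimal PyTorch sketch of that structure; the function name `monarch_matvec` and the tensor layout are illustrative assumptions, not the repo's actual API.

```python
import torch

def monarch_matvec(blk_r: torch.Tensor, blk_l: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Multiply x by a Monarch-structured matrix (illustrative sketch, not the repo's API).

    blk_r, blk_l: (m, m, m) tensors holding m blocks of size m x m each,
                  i.e. two block-diagonal matrices R and L of size n x n, with n = m * m.
    x:            (n,) input vector.
    """
    m = blk_r.shape[0]
    x = x.reshape(m, m)                       # view the n coordinates as an m x m grid
    x = torch.einsum('kij,kj->ki', blk_r, x)  # block-diagonal R: one small GEMM per row block
    x = x.transpose(0, 1)                     # fixed permutation between the two stages
    x = torch.einsum('kij,kj->ki', blk_l, x)  # block-diagonal L on the permuted grid
    return x.reshape(-1)

# Usage: n = 16 with m = 4 blocks of size 4 x 4
m = 4
R = torch.randn(m, m, m)
L = torch.randn(m, m, m)
x = torch.randn(m * m)
y = monarch_matvec(R, L, x)  # 2 * m * m^2 = 2 * n^1.5 multiply-adds vs n^2 for a dense matvec
```

Each einsum is just a batched GEMM, which is why this maps well onto hardware matrix units; whether the repo uses exactly this block-diagonal/permute/block-diagonal convention (versus an extra output permutation) is an assumption here.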
Alternatives and similar repositories for m2
Users interested in m2 are comparing it to the libraries listed below:
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆291 · Updated last year
- Annotated version of the Mamba paper ☆489 · Updated last year
- The repository for the code of the UltraFastBERT paper ☆518 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆388 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆722 · Updated last year
- Huggingface-compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent,… ☆226 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆915 · Updated 5 months ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆653 · Updated 9 months ago
- ☆200 · Updated last month
- A repository for research on medium-sized language models. ☆514 · Updated 4 months ago
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in PyTorch ☆417 · Updated 9 months ago
- Code repository for the paper "Matryoshka Representation Learning" ☆569 · Updated last year
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆542 · Updated 5 months ago
- Understand and test language model architectures on synthetic tasks. ☆233 · Updated last month
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated 5 months ago
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆933 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆465 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆241 · Updated 4 months ago
- Effortless plug-and-play optimizer that cuts model training costs by 50%; a new optimizer that is 2x faster than Adam on LLMs. ☆382 · Updated last year
- Beyond Language Models: Byte Models are Digital World Simulators ☆329 · Updated last year
- Large Context Attention ☆743 · Updated last week
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆385 · Updated last year
- Code repository for Black Mamba ☆257 · Updated last year
- Reference implementation of the Megalodon 7B model ☆522 · Updated 5 months ago
- ☆415 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆141 · Updated last year
- Convolutions for Sequence Modeling ☆900 · Updated last year
- The official implementation of "Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training" ☆976 · Updated last year
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆154 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆201 · Updated last year