HazyResearch / m2
Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture"
☆555 · Updated 7 months ago
Alternatives and similar repositories for m2
Users interested in m2 are comparing it to the libraries listed below.
- The repository for the code of the UltraFastBERT paper ☆516 · Updated last year
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention…" ☆290 · Updated last year
- Annotated version of the Mamba paper ☆487 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆388 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆899 · Updated 3 months ago
- ☆192 · Updated last week
- A repository for research on medium-sized language models. ☆506 · Updated last month
- Hugging Face-compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf) including parallel, recurrent,… ☆226 · Updated last year
- Implementation of Recurrent Memory Transformer, a NeurIPS 2022 paper, in PyTorch ☆412 · Updated 6 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆532 · Updated 2 months ago
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆646 · Updated 7 months ago
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆702 · Updated last year
- Scaling Data-Constrained Language Models ☆338 · Updated last month
- Code repository for the paper "Matryoshka Representation Learning" ☆531 · Updated last year
- Effortless plug-and-play optimizer that cuts model training costs by 50%; a new optimizer that is 2x faster than Adam on LLMs. ☆379 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆221 · Updated 3 weeks ago
- Official PyTorch implementation of QA-LoRA ☆138 · Updated last year
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach (a minimal probe sketch follows this list). ☆208 · Updated 2 months ago
- Large Context Attention ☆719 · Updated 6 months ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆458 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… (a toy sketch of the lookup follows this list) ☆342 · Updated 7 months ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆595 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆238 · Updated last month
- Language Modeling with the H3 State Space Model ☆519 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆199 · Updated 11 months ago
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆431 · Updated 2 months ago
- Recurrent Memory Transformer ☆150 · Updated last year
- A repository for log-time feedforward networks ☆222 · Updated last year
- ☆529 · Updated 8 months ago
- The official implementation of “Sophia: A Scalable Stochastic Second-order Optimizer for Language Model Pre-training” ☆965 · Updated last year
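
For the BABILong entry above: the needle-in-a-haystack approach buries a small fact (the "needle") at varying depths inside long filler text and checks whether the model can retrieve it. Below is a minimal, illustrative probe in that spirit; it is not BABILong's actual code, and `make_haystack_prompt`, the filler sentences, and the question wording are all made up for the sketch.

```python
import random

def make_haystack_prompt(needle: str, filler_sentences: list[str],
                         context_len: int, depth: float) -> str:
    """Bury a 'needle' fact at a relative depth inside filler text,
    then ask the model to retrieve it (illustrative, not BABILong's code)."""
    filler = []
    while sum(len(s) for s in filler) < context_len:
        filler.append(random.choice(filler_sentences))
    insert_at = int(len(filler) * depth)  # 0.0 = start of context, 1.0 = end
    filler.insert(insert_at, needle)
    context = " ".join(filler)
    return f"{context}\n\nQuestion: what is the magic number mentioned above?"

prompt = make_haystack_prompt(
    needle="The magic number is 4127.",
    filler_sentences=["The sky was grey.", "The train was late again."],
    context_len=2000,
    depth=0.5,
)
# An evaluator would sweep context_len and depth, query the model with each
# prompt, and score whether "4127" appears in the model's answer.
```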
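For the memory-layers entry above (whose description is truncated): here is a toy PyTorch sketch of the trainable key-value lookup idea it describes, not code from that repository. `MemoryLayer`, `num_slots`, and `top_k` are illustrative names; a real implementation (e.g., product-key memories) avoids scoring every slot, whereas this sketch does a full dot product for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    """Toy trainable key-value memory lookup (illustrative only).

    Each token's hidden state queries a learned key table; only the top-k
    matching values are gathered, so parameters grow with the table size
    while the per-token compute of the value mix stays roughly constant.
    """

    def __init__(self, dim: int, num_slots: int = 4096, top_k: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * dim ** -0.5)
        self.values = nn.Embedding(num_slots, dim)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim). Score every memory slot per token
        # (real memory layers use product keys to avoid this full scan).
        scores = x @ self.keys.t()                      # (batch, seq, num_slots)
        top_scores, top_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(top_scores, dim=-1)         # sparse mixing weights
        gathered = self.values(top_idx)                 # (batch, seq, top_k, dim)
        return (weights.unsqueeze(-1) * gathered).sum(dim=-2)

# Usage: a drop-in residual sub-layer, e.g. in place of an FFN block.
layer = MemoryLayer(dim=64)
out = layer(torch.randn(2, 10, 64))
print(out.shape)  # torch.Size([2, 10, 64])
```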