HazyResearch / m2
Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture"
☆560 · Updated last year
Alternatives and similar repositories for m2
Users interested in m2 are comparing it to the libraries listed below.
- The repository for the code of the UltraFastBERT paper ☆519 · Updated last year
- Annotated version of the Mamba paper ☆493 · Updated last year
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" ☆294 · Updated last year
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆655 · Updated last year
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in PyTorch ☆422 · Updated last year
- Code repository for the paper "Matryoshka Representation Learning" ☆587 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆733 · Updated last year
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆549 · Updated 7 months ago
- Huggingface-compatible implementation of RetNet (Retentive Networks, https://arxiv.org/pdf/2307.08621.pdf), including parallel, recurrent, and chunkwise forward modes ☆227 · Updated last year
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆389 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆936 · Updated last month
- A repository for log-time feedforward networks ☆224 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆248 · Updated 3 months ago
- Language Modeling with the H3 State Space Model ☆521 · Updated 2 years ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆375 · Updated last year
- A repository for research on medium-sized language models. ☆524 · Updated 7 months ago
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆231 · Updated last year
- Code repository for BlackMamba ☆260 · Updated last year
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆473 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs (a minimal sketch of the idea follows this list) ☆365 · Updated last year
- Multipack distributed sampler for fast padding-free training of LLMs ☆203 · Updated last year
- Effortless plug-and-play optimizer to cut model training costs by 50%; a new optimizer that is 2x faster than Adam on LLMs ☆383 · Updated last year
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆448 · Updated 7 months ago
- Ungreedy subword tokenizer and vocabulary trainer for Python, Go & JavaScript ☆609 · Updated last year
- Scaling Data-Constrained Language Models ☆343 · Updated 6 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆244 · Updated 7 months ago
- Quick implementation of nGPT, learning entirely on the hypersphere, from NvidiaAI ☆294 · Updated 7 months ago
- Large Context Attention ☆758 · Updated 2 months ago
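The memory-layers entry above mentions a trainable key-value lookup that adds parameters without adding FLOPs. As a rough illustration only (not the listed repo's code; the class and parameter names below are hypothetical), a sparse key-value memory layer in PyTorch might look like this. Note that this naive sketch still scores every key; real memory layers use tricks like product-key decomposition to make the lookup itself cheap:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeyValueMemoryLayer(nn.Module):
    """Minimal sketch of a sparse key-value memory layer (hypothetical
    simplification): queries are scored against a large trainable key
    table, but only the top-k matching value vectors are gathered, so
    parameter count grows with the table size while the per-token value
    mixing stays at k vectors."""

    def __init__(self, dim: int, num_slots: int = 4096, top_k: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim) * dim ** -0.5)
        self.values = nn.Parameter(torch.randn(num_slots, dim) * dim ** -0.5)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim). Score every slot, keep only the top-k.
        scores = x @ self.keys.T                       # (batch, seq, num_slots)
        topk_scores, topk_idx = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)       # (batch, seq, top_k)
        # Gather the selected value vectors and mix them.
        selected = self.values[topk_idx]               # (batch, seq, top_k, dim)
        out = (weights.unsqueeze(-1) * selected).sum(dim=-2)
        return x + out                                 # residual, as in FFN blocks

# Usage: a drop-in companion to (or replacement for) a dense FFN block.
layer = KeyValueMemoryLayer(dim=64, num_slots=1024, top_k=4)
h = torch.randn(2, 16, 64)
print(layer(h).shape)  # torch.Size([2, 16, 64])
```

Because only `top_k` value vectors are gathered per token, the value table can grow very large (more parameters) while the compute spent mixing values stays fixed, which is the trade-off the entry describes.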