HazyResearch / m2
Repo for "Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture"
☆550 · Updated 5 months ago
Alternatives and similar repositories for m2
Users interested in m2 are comparing it to the repositories listed below.
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆288 · Updated last year
- Large Context Attention ☆711 · Updated 4 months ago
- Annotated version of the Mamba paper ☆482 · Updated last year
- Implementation of the conditionally routed attention in the CoLT5 architecture, in PyTorch ☆228 · Updated 8 months ago
- Understand and test language model architectures on synthetic tasks. ☆195 · Updated 2 months ago
- Official PyTorch implementation of QA-LoRA ☆135 · Updated last year
- Implementation of Recurrent Memory Transformer, NeurIPS 2022 paper, in PyTorch ☆408 · Updated 4 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆727 · Updated 8 months ago
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆514 · Updated 2 weeks ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆851 · Updated this week
- A repository for research on medium-sized language models. ☆495 · Updated 3 weeks ago
- The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction ☆386 · Updated 10 months ago
- The repository for the code of the UltraFastBERT paper ☆514 · Updated last year
- Implementation of MEGABYTE, Predicting Million-byte Sequences with Multiscale Transformers, in PyTorch ☆642 · Updated 5 months ago
- ☆190 · Updated this week
- Code for Adam-mini: Use Fewer Learning Rates To Gain More (https://arxiv.org/abs/2406.16793) ☆417 · Updated 2 weeks ago
- Implementation of ST-MoE, the latest incarnation of MoE after years of research at Brain, in PyTorch ☆333 · Updated 11 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆876 · Updated last month
- Multipack distributed sampler for fast padding-free training of LLMs ☆188 · Updated 9 months ago
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆611 · Updated last year
- Extend existing LLMs way beyond the original training length with constant memory usage, without retraining ☆697 · Updated last year
- Implementation of DoRA ☆294 · Updated 11 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆233 · Updated 3 months ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆454 · Updated last year
- ☆258 · Updated last year
- Website for hosting the Open Foundation Models Cheat Sheet. ☆267 · Updated 3 weeks ago
- Official repository for ORPO ☆453 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆635 · Updated 10 months ago
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆202 · Updated 3 weeks ago
- Batched LoRAs ☆343 · Updated last year