google-deepmind / language_modeling_is_compression
☆151 · Updated last year
Alternatives and similar repositories for language_modeling_is_compression
Users interested in language_modeling_is_compression are comparing it to the repositories listed below.
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆240 · Updated 2 months ago
- Some preliminary explorations of Mamba's context scaling. · ☆217 · Updated last year
- Physics of Language Models, Part 4 · ☆236 · Updated last month
- ☆195 · Updated this week
- ☆98 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch · ☆179 · Updated 2 months ago
- ☆103 · Updated 11 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" · ☆176 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] · ☆141 · Updated 11 months ago
- Understand and test language model architectures on synthetic tasks. · ☆222 · Updated last month
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" · ☆227 · Updated 4 months ago
- Language models scale reliably with over-training and on downstream tasks · ☆98 · Updated last year
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM · ☆89 · Updated 8 months ago
- Replicating O1 inference-time scaling laws · ☆89 · Updated 9 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" · ☆170 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind · ☆127 · Updated last year
- Code for creating the iGSM datasets from the paper "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces…" · ☆74 · Updated 7 months ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) · ☆136 · Updated last year
- ☆72 · Updated last year
- ☆85 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws · ☆57 · Updated 11 months ago
- ☆149 · Updated 2 years ago
- Layer-Condensed KV cache with 10 times larger batch size, fewer parameters, and less computation; dramatic speedup with better task performance… · ☆153 · Updated 4 months ago
- [ICML'24 Oral] Official code of "DiJiang: Efficient Large Language Models through Compact Kernelization", a novel DCT-based linear at… · ☆102 · Updated last year
- ☆80 · Updated 6 months ago
- ☆52 · Updated 2 months ago
- [ICLR 2025] DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models · ☆293 · Updated 3 months ago
- [ICLR 2023] "Learning to Grow Pretrained Models for Efficient Transformer Training" by Peihao Wang, Rameswar Panda, Lucas Torroba Hennige… · ☆92 · Updated last year
- The HELMET Benchmark · ☆169 · Updated 2 weeks ago
- ☆187 · Updated 4 months ago