google-deepmind / language_modeling_is_compression
☆170 · Updated last year
Alternatives and similar repositories for language_modeling_is_compression
Users interested in language_modeling_is_compression are comparing it to the libraries listed below.
- Official github repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] · ☆147 · Updated last year
- ☆112 · Updated last year
- Some preliminary explorations of Mamba's context scaling. · ☆218 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws · ☆59 · Updated last year
- Code accompanying the paper "Massive Activations in Large Language Models" · ☆195 · Updated last year
- ☆203 · Updated 9 months ago
- ☆75 · Updated last year
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces… · ☆84 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch · ☆182 · Updated 7 months ago
- ☆91 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆246 · Updated 7 months ago
- Language models scale reliably with over-training and on downstream tasks · ☆99 · Updated last year
- ☆108 · Updated last year
- Code for studying the super weight in LLM · ☆120 · Updated last year
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" · ☆109 · Updated 3 months ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models · ☆236 · Updated 3 months ago
- Pytorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind · ☆133 · Updated 2 months ago
- [ICLR2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. · ☆104 · Updated last year
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" · ☆176 · Updated last year
- Physics of Language Models: Part 4.2, Canon Layers at Scale where Synthetic Pretraining Resonates in Reality · ☆314 · Updated 3 weeks ago
- AnchorAttention: Improved attention for LLM long-context training · ☆213 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" · ☆88 · Updated last year
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" · ☆230 · Updated last year
- Understand and test language model architectures on synthetic tasks. · ☆251 · Updated 2 weeks ago
- Replicating O1 inference-time scaling laws · ☆91 · Updated last year
- ☆207 · Updated 2 weeks ago
- Sparse Backpropagation for Mixture-of-Expert Training · ☆29 · Updated last year
- The official github repo for "Diffusion Language Models are Super Data Learners". · ☆219 · Updated 2 months ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" · ☆123 · Updated last year
- ☆107 · Updated last year