google-deepmind / language_modeling_is_compression
☆165 · Updated last year
Alternatives and similar repositories for language_modeling_is_compression
Users interested in language_modeling_is_compression are comparing it to the repositories listed below.
- ☆109 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆243 · Updated 5 months ago
- Some preliminary explorations of Mamba's context scaling. ☆217 · Updated last year
- ☆106 · Updated last year
- ☆75 · Updated last year
- ☆205 · Updated 2 weeks ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆143 · Updated last year
- ☆88 · Updated last year
- Replicating O1 inference-time scaling laws ☆90 · Updated 11 months ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆231 · Updated last month
- Physics of Language Models, Part 4 ☆260 · Updated 4 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆58 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆180 · Updated 5 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆131 · Updated 3 weeks ago
- Understand and test language model architectures on synthetic tasks. ☆240 · Updated 2 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆187 · Updated last year
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆175 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- Sparse Backpropagation for Mixture-of-Expert Training ☆29 · Updated last year
- [ICLR 2025] Code for the paper "Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning" ☆85 · Updated 9 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆67 · Updated 9 months ago
- Layer-Condensed KV cache with a 10× larger batch size, fewer parameters, and less computation. Dramatic speed-up with better task performance… ☆157 · Updated 7 months ago
- Stick-breaking attention ☆61 · Updated 4 months ago
- [NeurIPS 2024] Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study ☆55 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆85 · Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆121 · Updated last year
- Normalized Transformer (nGPT) ☆194 · Updated last year
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 ☆141 · Updated 9 months ago
- Official implementation of Phi-Mamba, a MOHAWK-distilled model (Transformers to SSMs: Distilling Quadratic Knowledge to Subquadratic Mode… ☆116 · Updated last year