google-deepmind / language_modeling_is_compression
☆157 · Updated last year
Alternatives and similar repositories for language_modeling_is_compression
Users interested in language_modeling_is_compression are comparing it to the libraries listed below.
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆142 · Updated last year
- Some preliminary explorations of Mamba's context scaling. ☆216 · Updated last year
- ☆72 · Updated last year
- ☆107 · Updated last year
- Physics of Language Models, Part 4 ☆248 · Updated 2 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆241 · Updated 4 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆184 · Updated last year
- ☆91 · Updated 7 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆97 · Updated 10 months ago
- ☆101 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆178 · Updated 4 months ago
- Replicating O1 inference-time scaling laws ☆90 · Updated 10 months ago
- [NeurIPS'24 Spotlight] Observational Scaling Laws ☆57 · Updated last year
- ☆86 · Updated last year
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆230 · Updated 5 months ago
- Token Omission Via Attention ☆127 · Updated last year
- The code for creating the iGSM datasets in the papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces…" ☆78 · Updated 9 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆102 · Updated last week
- Stick-breaking attention ☆60 · Updated 3 months ago
- ☆195 · Updated 6 months ago
- Understand and test language model architectures on synthetic tasks. ☆233 · Updated 3 weeks ago
- Open-source code for the paper "Retrieval Head Mechanistically Explains Long-Context Factuality" ☆215 · Updated last year
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆78 · Updated last year
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU Clusters ☆130 · Updated 10 months ago
- Implementation of the NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆148 · Updated 7 months ago
- Code for "Reasoning to Learn from Latent Thoughts" ☆121 · Updated 6 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆129 · Updated last year
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 ☆140 · Updated 7 months ago
- Fast and memory-efficient exact attention ☆71 · Updated 7 months ago