google-deepmind / language_modeling_is_compression
☆168 · Updated last year
Alternatives and similar repositories for language_modeling_is_compression
Users interested in language_modeling_is_compression are comparing it to the repositories listed below.
- Official github repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] · ☆144 · Updated last year
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" · ☆243 · Updated 6 months ago
- ☆107 · Updated last year
- [NeurIPS'24 Spotlight] Observational Scaling Laws · ☆59 · Updated last year
- Some preliminary explorations of Mamba's context scaling. · ☆218 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models · ☆232 · Updated 2 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" · ☆187 · Updated last year
- ☆111 · Updated last year
- ☆205 · Updated last week
- Replicating O1 inference-time scaling laws · ☆91 · Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" · ☆121 · Updated last year
- [ICLR 2025] Code for the paper "Beyond Autoregression: Discrete Diffusion for Complex Reasoning and Planning" · ☆86 · Updated 10 months ago
- Physics of Language Models, Part 4 · ☆280 · Updated 2 weeks ago
- ☆83 · Updated 2 years ago
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 · ☆142 · Updated 10 months ago
- Understand and test language model architectures on synthetic tasks. · ☆247 · Updated 3 months ago
- The code for creating the iGSM datasets in papers "Physics of Language Models Part 2.1, Grade-School Math and the Hidden Reasoning Proces… · ☆78 · Updated 11 months ago
- ☆91 · Updated last year
- ☆75 · Updated last year
- ☆185 · Updated last year
- ☆101 · Updated 10 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch · ☆181 · Updated 6 months ago
- open-source code for paper: Retrieval Head Mechanistically Explains Long-Context Factuality · ☆224 · Updated last year
- ☆200 · Updated 8 months ago
- [NeurIPS 2024] Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study · ☆57 · Updated last year
- [NeurIPS-2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 · ☆89 · Updated last year
- Code for studying the super weight in LLM · ☆121 · Updated last year
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" · ☆107 · Updated 2 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) · ☆162 · Updated 8 months ago
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" · ☆152 · Updated 9 months ago