lucidrains / coconut-pytorch
Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch
☆179 · Updated 2 months ago
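Coconut's core idea is to reason in latent space: instead of decoding a chain-of-thought token and re-embedding it, the model feeds its final hidden state straight back in as the next input embedding (a "continuous thought"). A minimal illustrative sketch of that loop, assuming a toy transformer (this is not the lucidrains implementation; `TinyLM` and all names here are hypothetical, and a causal mask is omitted for brevity):

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy transformer used only to illustrate the continuous-thought loop."""
    def __init__(self, vocab=100, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.to_logits = nn.Linear(dim, vocab)

    def forward(self, embeds):
        # embeds: (batch, seq, dim) -> hidden states of the same shape
        return self.encoder(embeds)

model = TinyLM()
tokens = torch.randint(0, 100, (1, 5))   # a short prompt
embeds = model.embed(tokens)             # (1, 5, 32)

num_thoughts = 3
for _ in range(num_thoughts):
    hidden = model(embeds)
    thought = hidden[:, -1:, :]          # last hidden state = one continuous thought
    embeds = torch.cat([embeds, thought], dim=1)  # feed it back as the next input embedding

# after "thinking", decode an actual token from the final position
logits = model.to_logits(model(embeds)[:, -1])    # (1, vocab)
```

The key step is appending the raw hidden state to the input sequence rather than sampling a token, so the intermediate reasoning never passes through the discrete vocabulary.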
Alternatives and similar repositories for coconut-pytorch
Users interested in coconut-pytorch are comparing it to the libraries listed below.
- ☆104 · Updated 11 months ago
- Some preliminary explorations of Mamba's context scaling. ☆217 · Updated last year
- ☆188 · Updated 4 months ago
- ☆85 · Updated last year
- [NeurIPS 2024] Official repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆229 · Updated 4 months ago
- ☆122 · Updated 6 months ago
- A large-scale, high-quality math dataset for reinforcement learning in language models ☆63 · Updated 6 months ago
- PyTorch implementation of the PEER block from the paper Mixture of a Million Experts, by Xu Owen He of DeepMind ☆128 · Updated last year
- [COLM 2025] Code for the paper Learning Adaptive Parallel Reasoning with Language Models ☆126 · Updated 3 weeks ago
- Language models scale reliably with over-training and on downstream tasks ☆98 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆224 · Updated last month
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆240 · Updated 3 months ago
- Physics of Language Models, Part 4 ☆242 · Updated last month
- [NeurIPS 2024] Low-rank memory-efficient optimizer without SVD ☆31 · Updated 2 months ago
- ☆86 · Updated 7 months ago
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆64 · Updated 5 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆161 · Updated 4 months ago
- The official repository for Inheritune. ☆113 · Updated 7 months ago
- Replicating o1 inference-time scaling laws ☆89 · Updated 9 months ago
- AnchorAttention: improved attention for long-context LLM training ☆212 · Updated 7 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆157 · Updated 2 months ago
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆177 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆171 · Updated 2 months ago
- ☆84 · Updated 6 months ago
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆98 · Updated last month
- [NeurIPS 2024] Code for the paper "Diffusion of Thoughts: Chain-of-Thought Reasoning in Diffusion Language Models" ☆177 · Updated 6 months ago
- Code for the NeurIPS 2024 paper "Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization" ☆229 · Updated last month
- ☆135 · Updated 10 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆89 · Updated 8 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆280 · Updated last week