lucidrains / coconut-pytorch
Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch
☆181 · Updated 6 months ago
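For context, Coconut swaps the textual chain of thought for a latent one: at each reasoning step the model's last hidden state is fed straight back in as the next input embedding, and a token is only decoded once the latent steps are done. Below is a minimal, hypothetical sketch of that loop in PyTorch; `TinyDecoder`, its dimensions, and `coconut_rollout` are illustrative stand-ins, not this repository's actual API.

```python
# Minimal sketch of the Coconut idea (chain of continuous thought).
# Everything here is an illustrative assumption, not the repo's interface.
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Toy causal decoder: token ids in, hidden states / logits out."""
    def __init__(self, vocab_size=256, dim=64, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)
        self.to_logits = nn.Linear(dim, vocab_size)

    def hidden(self, x):
        # x: (batch, seq, dim) input embeddings; causal mask keeps it autoregressive
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1)).to(x.device)
        return self.blocks(x, mask=mask)

def coconut_rollout(model, prompt_ids, num_latent_steps=4):
    # Latent reasoning loop: append the last hidden state as the next
    # input embedding instead of sampling and re-embedding a token.
    x = model.embed(prompt_ids)                      # (batch, seq, dim)
    for _ in range(num_latent_steps):
        h = model.hidden(x)
        thought = h[:, -1:, :]                       # continuous "thought" vector
        x = torch.cat([x, thought], dim=1)           # fed back, bypassing the LM head
    return model.to_logits(model.hidden(x)[:, -1])   # decode a token only at the end

logits = coconut_rollout(TinyDecoder(), torch.randint(0, 256, (1, 8)))
print(logits.shape)  # torch.Size([1, 256])
```

In the actual method, training progressively replaces written chain-of-thought steps with these latent steps; see the Coconut paper for the curriculum details.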
Alternatives and similar repositories for coconut-pytorch
Users interested in coconut-pytorch are comparing it to the repositories listed below.
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated last year
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆233 · Updated 2 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆68 · Updated 9 months ago
- AnchorAttention: Improved attention for LLMs' long-context training ☆213 · Updated 11 months ago
- [COLM 2025] Code for the paper "Learning Adaptive Parallel Reasoning with Language Models" ☆136 · Updated 4 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆133 · Updated last month
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP 2024) ☆148 · Updated last year
- This is the official repository for Inheritune. ☆117 · Updated 10 months ago
- Language models scale reliably with over-training and on downstream tasks ☆100 · Updated last year
- [NeurIPS 2024] Low-rank, memory-efficient optimizer without SVD ☆32 · Updated 5 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆243 · Updated 6 months ago
- Replicating o1 inference-time scaling laws ☆91 · Updated last year
- General Reasoner: Advancing LLM Reasoning Across All Domains [NeurIPS 2025] ☆210 · Updated 3 weeks ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆162 · Updated 8 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆96 · Updated last month
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆179 · Updated last year
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆102 · Updated last year
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆176 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆246 · Updated 2 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆332 · Updated this week
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025]☆180Updated 5 months ago
- Physics of Language Models, Part 4☆270Updated 2 weeks ago