lucidrains / coconut-pytorch
Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch
☆182 · Updated 7 months ago
Alternatives and similar repositories for coconut-pytorch
Users interested in coconut-pytorch are comparing it to the libraries listed below.
- Some preliminary explorations of Mamba's context scaling. ☆218 · Updated last year
- ☆112 · Updated last year
- ☆91 · Updated last year
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆134 · Updated 3 months ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆236 · Updated 3 months ago
- ☆85 · Updated 2 months ago
- ☆203 · Updated 9 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆71 · Updated 11 months ago
- [NeurIPS 2024] Low rank memory efficient optimizer without SVD ☆33 · Updated 7 months ago
- ☆123 · Updated 11 months ago
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models ☆121 · Updated 11 months ago
- Language models scale reliably with over-training and on downstream tasks ☆99 · Updated last year
- AnchorAttention: Improved attention for LLMs long-context training ☆213 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Updated 9 months ago
- ☆75 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆97 · Updated 2 months ago
- The official implementation of Self-Exploring Language Models (SELM) ☆63 · Updated last year
- Code for ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆110 · Updated 3 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆246 · Updated 7 months ago
- Replicating O1 inference-time scaling laws ☆91 · Updated last year
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆186 · Updated last week
- ☆26 · Updated last year
- [COLM 2025] Code for paper "Learning Adaptive Parallel Reasoning with Language Models" ☆138 · Updated last month
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated last year
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆88 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆251 · Updated 2 weeks ago
- Long Context Extension and Generalization in LLMs ☆62 · Updated last year
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆147 · Updated last year
- [NeurIPS 2024] Can LLMs Learn by Teaching for Better Reasoning? A Preliminary Study ☆59 · Updated last year
- ☆207 · Updated 2 weeks ago