lucidrains / coconut-pytorch
Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch
☆178 · Updated last month
Alternatives and similar repositories for coconut-pytorch
Users interested in coconut-pytorch are comparing it to the libraries listed below.
- ☆99 · Updated 10 months ago
- Physics of Language Models, Part 4 ☆67 · Updated this week
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆226 · Updated 2 months ago
- ☆187 · Updated 3 months ago
- Some preliminary explorations of Mamba's context scaling. ☆216 · Updated last year
- ☆82 · Updated 11 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆160 · Updated 3 months ago
- A Large-Scale, High-Quality Math Dataset for Reinforcement Learning in Language Models ☆59 · Updated 5 months ago
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Pytorch implementation of the PEER block from the paper, Mixture of A Million Experts, by Xu Owen He at Deepmind ☆127 · Updated 11 months ago
- This is the official repository for Inheritune. ☆112 · Updated 5 months ago
- AnchorAttention: Improved attention for LLMs long-context training ☆212 · Updated 6 months ago
- ☆82 · Updated 6 months ago
- 📖 This is a repository for organizing papers, codes, and other resources related to Latent Reasoning. ☆161 · Updated last week
- [ICLR2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆84 · Updated 7 months ago
- ☆117 · Updated 5 months ago
- [COLM 2025] Code for Paper: Learning Adaptive Parallel Reasoning with Language Models ☆116 · Updated 3 months ago
- [ICLR2025] DiffuGPT and DiffuLLaMA: Scaling Diffusion Language Models via Adaptation from Autoregressive Models ☆251 · Updated 2 months ago
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆92 · Updated last week
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆263 · Updated last week
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆149 · Updated last month
- [NeurIPS 2024] Low rank memory efficient optimizer without SVD ☆30 · Updated last month
- General Reasoner: Advancing LLM Reasoning Across All Domains ☆156 · Updated last month
- ☆78 · Updated 5 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆172 · Updated last year
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers ☆70 · Updated last month
- ☆135 · Updated 8 months ago
- The official implementation of Self-Exploring Language Models (SELM) ☆64 · Updated last year
- Understand and test language model architectures on synthetic tasks. ☆221 · Updated 2 weeks ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆167 · Updated last year