lucidrains / coconut-pytorch
Implementation of 🥥 Coconut, Chain of Continuous Thought, in Pytorch
☆156 · Updated last month
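Coconut replaces the written chain of thought with a loop in latent space: the model's final hidden state is fed back in as the next input embedding, so intermediate "thoughts" are never decoded into tokens. Below is a minimal sketch of that loop, assuming a decoder-style backbone whose hidden width matches its embedding width; the wrapper class, `backbone`, and `num_thoughts` names are illustrative, not this repo's actual API.

```python
import torch
import torch.nn as nn

class ContinuousThoughtSketch(nn.Module):
    """Illustrative sketch (not coconut-pytorch's API) of a chain-of-continuous-thought loop."""

    def __init__(self, backbone: nn.Module, embed: nn.Embedding, num_thoughts: int = 4):
        super().__init__()
        self.backbone = backbone          # any transformer mapping embeddings -> hidden states
        self.embed = embed                # token embedding table
        self.num_thoughts = num_thoughts  # number of latent reasoning steps

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embed(token_ids)                # (batch, seq, dim)
        for _ in range(self.num_thoughts):
            hidden = self.backbone(x)            # (batch, seq, dim)
            thought = hidden[:, -1:, :]          # last hidden state = latent thought, never decoded
            x = torch.cat((x, thought), dim=1)   # append it as the next input embedding
        return self.backbone(x)                  # final hidden states for ordinary token decoding
```

In the paper, training uses a curriculum that gradually swaps written chain-of-thought steps for these latent steps; the sketch covers only the forward loop.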
Alternatives and similar repositories for coconut-pytorch:
Users interested in coconut-pytorch are comparing it to the repositories listed below.
- [NeurIPS 2024] Official repository of "The Mamba in the Llama: Distilling and Accelerating Hybrid Models" ☆196 · Updated 3 weeks ago
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 10 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆149 · Updated 2 months ago
- Code for exploring Based models from "Simple linear attention language models balance the recall-throughput tradeoff" ☆221 · Updated this week
- Understand and test language model architectures on synthetic tasks. ☆181 · Updated last month
- Implementation of CALM from the paper "LLM Augmented LLMs: Expanding Capabilities through Composition", out of Google DeepMind ☆173 · Updated 5 months ago
- Pytorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆118 · Updated 5 months ago
- The simplest, fastest repository for training/finetuning medium-sized GPTs. ☆95 · Updated 3 months ago
- Multimodal language model benchmark, featuring challenging examples ☆158 · Updated 2 months ago
- Some preliminary explorations of Mamba's context scaling. ☆213 · Updated last year
- Normalized Transformer (nGPT) ☆152 · Updated 3 months ago
- This is the official repository for Inheritune. ☆109 · Updated last week
- OpenCoconut implements a latent reasoning paradigm where we generate thoughts before decoding. ☆167 · Updated last month
- Implementation of Infini-Transformer in Pytorch ☆109 · Updated last month
- Repo for "LoLCATs: On Low-Rank Linearizing of Large Language Models" ☆215 · Updated 3 weeks ago
- A curated reading list of research in Adaptive Computation, Inference-Time Computation & Mixture of Experts (MoE). ☆139 · Updated last month
- Token Omission Via Attention ☆123 · Updated 4 months ago
- Multipack distributed sampler for fast padding-free training of LLMs ☆184 · Updated 6 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆57 · Updated 3 weeks ago
- 🌾 OAT: A research-friendly framework for LLM online alignment, including preference learning, reinforcement learning, etc. ☆194 · Updated last week
- Replicating o1 inference-time scaling laws ☆82 · Updated 2 months ago