ZihanWang314 / CoE
Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models
☆220 · Updated 2 weeks ago
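Since the one-line description above is terse, here is a minimal PyTorch sketch of one way to read the mechanism: rather than summing the outputs of k independently selected experts, the token is pushed through experts over several iterations, with each iteration's router conditioning on the previous iteration's output. The class name, hyperparameters, and iterate-then-re-route structure are illustrative assumptions, not this repo's actual code.

```python
# A minimal sketch of a Chain-of-Experts-style MoE layer, assuming the core
# idea is "route, apply experts, then re-route on the result" within one layer.
# Names and hyperparameters are illustrative, not taken from the repo.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChainOfExpertsLayer(nn.Module):
    def __init__(self, d_model: int, n_experts: int = 8, top_k: int = 2, n_iters: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        # One router per iteration, so step t's routing can react to step t-1's output.
        self.routers = nn.ModuleList(nn.Linear(d_model, n_experts) for _ in range(n_iters))
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        h = x
        for router in self.routers:
            probs = F.softmax(router(h), dim=-1)         # (B, S, n_experts)
            topw, topi = probs.topk(self.top_k, dim=-1)  # keep only the top-k experts
            topw = topw / topw.sum(dim=-1, keepdim=True) # renormalize chosen weights
            gates = torch.zeros_like(probs).scatter(-1, topi, topw)
            out = torch.zeros_like(h)
            for e, expert in enumerate(self.experts):
                # Dense for readability; a real MoE gathers only the routed tokens.
                out = out + gates[..., e:e + 1] * expert(h)
            # Residual update: the next iteration routes on this result, which is
            # the "communication between experts" the description refers to.
            h = h + out
        return h
```

A forward pass like `ChainOfExpertsLayer(512)(torch.randn(2, 16, 512))` returns a tensor of the same shape; only the intra-layer routing differs from a standard MoE block.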
Alternatives and similar repositories for CoE
Users interested in CoE are comparing it to the libraries listed below.
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆443 · Updated 4 months ago
- ☆84 · Updated 6 months ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆192 · Updated 3 months ago
- [EMNLP 2025 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆64 · Updated 5 months ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆229 · Updated 5 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM). ☆287 · Updated last week
- Tina: Tiny Reasoning Models via LoRA ☆284 · Updated last week
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆180 · Updated 3 months ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆172 · Updated 2 months ago
- Esoteric Language Models ☆99 · Updated 2 months ago
- The official repo of SynLogic: Synthesizing Verifiable Reasoning Data at Scale for Learning Logical Reasoning and Beyond ☆165 · Updated 2 months ago
- ☆96 · Updated 2 weeks ago
- [NeurIPS 2025] Reinforcement Learning for Reasoning in Large Language Models with One Training Example ☆360 · Updated this week
- Code for the paper: "Learning to Reason without External Rewards" ☆355 · Updated 2 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs; a minimal sketch of the idea follows this list. ☆343 · Updated 9 months ago
- Official codebase for "Can 1B LLM Surpass 405B LLM? Rethinking Compute-Optimal Test-Time Scaling". ☆271 · Updated 7 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆256 · Updated 4 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆171 · Updated last year
- [NeurIPS 2025] Simple extension on vLLM to help you speed up reasoning models without training. ☆197 · Updated 4 months ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆161 · Updated 5 months ago
- Survey: A collection of AWESOME papers and resources on the latest research in Mixture of Experts. ☆134 · Updated last year
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆260 · Updated last year
- Repo for the paper https://arxiv.org/abs/2504.13837 ☆196 · Updated 3 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆245 · Updated 4 months ago
- [ICML 2025] TokenSwift: Lossless Acceleration of Ultra Long Sequence Generation ☆113 · Updated 4 months ago
- Resources for our paper: "Agent-R: Training Language Model Agents to Reflect via Iterative Self-Training" ☆160 · Updated 3 months ago
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆261 · Updated 4 months ago
- ☆86 · Updated 8 months ago
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆107 · Updated last week
- ☆198 · Updated 9 months ago
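As referenced in the memory-layers item above, here is a minimal sketch of a trainable key-value lookup under the stated premise (extra parameters, roughly flat FLOPs): each token scores a learned key table, activates only its top-k slots, and mixes the matching learned values. `MemoryLayer`, `n_slots`, and the dense key scoring are illustrative assumptions rather than the linked repo's API.

```python
# A minimal sketch of a memory layer's trainable key-value lookup, assuming
# "extra parameters without extra FLOPs" means a large learned key/value table
# of which each token activates only its top-k slots. Names/sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLayer(nn.Module):
    def __init__(self, d_model: int, n_slots: int = 16384, top_k: int = 4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(n_slots, d_model) / d_model ** 0.5)
        self.values = nn.Parameter(torch.randn(n_slots, d_model) / d_model ** 0.5)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        scores = x @ self.keys.t()                    # (B, S, n_slots)
        topv, topi = scores.topk(self.top_k, dim=-1)  # each token activates k slots
        w = F.softmax(topv, dim=-1)                   # (B, S, k) mixing weights
        v = self.values[topi]                         # (B, S, k, d_model) sparse value read
        # Per-token value mixing stays O(k * d_model) regardless of table size.
        return x + (w.unsqueeze(-1) * v).sum(dim=-2)
```

Growing `n_slots` adds capacity without changing the k value reads per token; note the key scoring here is dense for simplicity, whereas production memory layers typically use product-key lookups so even the scoring cost stays sublinear in the table size.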