ZihanWang314 / CoE
Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models
☆167 · Updated last week
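The one-line description above is all this listing provides. As a rough illustration of the idea (a minimal sketch under my own assumptions, with hypothetical names such as `ChainedMoELayer`, not the CoE repository's actual implementation or API), experts can be made to "communicate" by chaining MoE iterations so that a token refined with one expert's help is re-routed to another expert on the next pass:

```python
# Illustrative sketch only, not the CoE repo's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChainedMoELayer(nn.Module):
    """Hypothetical layer: experts interact by re-routing the current hidden state."""

    def __init__(self, d_model: int, n_experts: int = 4, n_iterations: int = 2, top_k: int = 1):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        ])
        self.router = nn.Linear(d_model, n_experts)
        self.n_iterations = n_iterations
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model). Each iteration routes the *current* hidden state,
        # so experts chosen later see what experts chosen earlier produced.
        h = x
        for _ in range(self.n_iterations):
            gates = F.softmax(self.router(h), dim=-1)            # (B, S, n_experts)
            top_vals, top_idx = gates.topk(self.top_k, dim=-1)   # (B, S, top_k)
            out = torch.zeros_like(h)
            for e, expert in enumerate(self.experts):
                routed = (top_idx == e).any(dim=-1)              # (B, S) tokens sent to expert e
                if routed.any():
                    weight = torch.where(top_idx == e, top_vals, torch.zeros_like(top_vals)).sum(-1)
                    out[routed] = out[routed] + weight[routed].unsqueeze(-1) * expert(h[routed])
            h = h + out  # residual; the next iteration re-routes this refined state
        return h


if __name__ == "__main__":
    layer = ChainedMoELayer(d_model=32)
    print(layer(torch.randn(2, 5, 32)).shape)  # torch.Size([2, 5, 32])
```

The only point of the sketch is that the second routing pass operates on the first pass's expert outputs, which is what lets experts interact instead of running independently in parallel.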
Alternatives and similar repositories for CoE
Users interested in CoE are comparing it to the repositories listed below.
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆213 · Updated 2 weeks ago
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆166 · Updated last week
- Parallel Scaling Law for Language Model — Beyond Parameter and Inference Time Scaling ☆345 · Updated 2 weeks ago
- Research Code for preprint "Optimizing Test-Time Compute via Meta Reinforcement Finetuning". ☆94 · Updated 2 months ago
- Simple extension on vLLM to help you speed up reasoning models without training. ☆152 · Updated this week
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆59 · Updated last month
- ☆77 · Updated last month
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆155 · Updated last month
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆151 · Updated last month
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆102 · Updated 4 months ago
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆207 · Updated 3 weeks ago
- Efficient Triton implementation of Native Sparse Attention. ☆155 · Updated last week
- Official repo for paper: "Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't" ☆231 · Updated 3 weeks ago
- ☆80 · Updated 2 weeks ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆145 · Updated 2 months ago
- [NeurIPS 2024] Official Repository of The Mamba in the Llama: Distilling and Accelerating Hybrid Models ☆221 · Updated last month
- Tina: Tiny Reasoning Models via LoRA ☆245 · Updated this week
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks ☆143 · Updated 8 months ago
- An Open Math Pre-training Dataset with 370B Tokens. ☆87 · Updated last month
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆132 · Updated 11 months ago
- Unofficial implementation for the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆160 · Updated 11 months ago
- Official repository for "Reinforcement Learning for Reasoning in Large Language Models with One Training Example" ☆223 · Updated last week
- Repo of paper "Free Process Rewards without Process Labels" ☆149 · Updated 2 months ago
- ☆174 · Updated last month
- ☆201 · Updated 3 months ago
- SkyRL-v0: Train Real-World Long-Horizon Agents via Reinforcement Learning ☆343 · Updated last week
- ☆96 · Updated last month
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆128 · Updated last week
- ☆93 · Updated 2 weeks ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆184 · Updated 2 months ago