ZihanWang314 / CoE
Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models
☆151 · Updated 2 weeks ago
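The tagline above captures the core idea: instead of routing a token through its selected experts once, in parallel, experts are applied sequentially, so later routing decisions can condition on earlier experts' outputs. Below is a minimal, illustrative PyTorch sketch of that iterative-routing pattern; it is not the repository's actual code, and names such as `ChainOfExpertsLayer`, `num_iters`, and `top_k` are assumptions for illustration.

```python
# Illustrative sketch of a Chain-of-Experts-style MoE layer (assumed names,
# not the CoE repository's implementation).
import torch
import torch.nn as nn


class ChainOfExpertsLayer(nn.Module):
    def __init__(self, d_model: int, num_experts: int = 8,
                 top_k: int = 2, num_iters: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(num_experts)
        )
        self.router = nn.Linear(d_model, num_experts)
        self.top_k = top_k
        self.num_iters = num_iters

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, d_model)
        h = x
        for _ in range(self.num_iters):
            # Route on the *current* hidden state, so each iteration's
            # expert choice depends on what earlier experts produced.
            probs = self.router(h).softmax(dim=-1)
            weights, idx = probs.topk(self.top_k, dim=-1)  # (tokens, top_k)
            out = torch.zeros_like(h)
            for k in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, k] == e
                    if mask.any():
                        out[mask] += weights[mask, k].unsqueeze(-1) * expert(h[mask])
            h = h + out  # residual output feeds the next routing step
        return h
```

A standard MoE layer corresponds to `num_iters=1`; the "chain" is the `num_iters > 1` case, where the residual output of one routing step becomes the input of the next.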
Alternatives and similar repositories for CoE:
Users interested in CoE are comparing it to the libraries listed below
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆162 · Updated last week
- Research code for the preprint "Optimizing Test-Time Compute via Meta Reinforcement Fine-Tuning" ☆78 · Updated 2 weeks ago
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆150 · Updated 3 months ago
- Simple extension on vLLM to help you speed up reasoning models without training ☆139 · Updated 3 weeks ago
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" ☆131 · Updated last month
- Benchmark and research code for the paper "SWEET-RL: Training Multi-Turn LLM Agents on Collaborative Reasoning Tasks" ☆140 · Updated this week
- Towards Economical Inference: Enabling DeepSeek's Multi-Head Latent Attention in Any Transformer-based LLMs ☆149 · Updated last week
- Efficient Triton implementation of Native Sparse Attention ☆127 · Updated this week
- ☆68 · Updated this week
- ☆72 · Updated this week
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆130 · Updated 9 months ago
- Advancing Language Model Reasoning through Reinforcement Learning and Inference Scaling ☆95 · Updated 2 months ago
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆154 · Updated 9 months ago
- ☆262 · Updated 2 weeks ago
- The official implementation of the paper "MoA: Mixture of Sparse Attention for Automatic Large Language Model Compression" ☆122 · Updated 3 months ago
- A lightweight reproduction of DeepSeek-R1-Zero with in-depth analysis of self-reflection behavior ☆216 · Updated last week
- A highly capable 2.4B lightweight LLM using only 1T pre-training data, with all details ☆166 · Updated last week
- Official repo for "Programming Every Example: Lifting Pre-training Data Quality Like Experts at Scale" ☆229 · Updated last month
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆311 · Updated 3 months ago
- [ICML'24] The official implementation of "Rethinking Optimization and Architecture for Tiny Language Models" ☆121 · Updated 2 months ago
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆65 · Updated 3 months ago
- TransMLA: Multi-Head Latent Attention Is All You Need ☆221 · Updated 3 weeks ago
- ☆76 · Updated 2 months ago
- ☆87 · Updated 6 months ago
- A framework to study AI models in Reasoning, Alignment, and use of Memory (RAM) ☆209 · Updated last week
- [ICLR 2025] Benchmarking Agentic Workflow Generation ☆63 · Updated last month
- Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning ☆161 · Updated last week
- From Hours to Minutes: Lossless Acceleration of Ultra Long Sequence Generation ☆83 · Updated last week
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆170 · Updated 3 weeks ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆103 · Updated 2 weeks ago