Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models (☆231, updated Nov 4, 2025)
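As a rough illustration of the idea named in the description: a standard MoE layer dispatches each token to its top-k experts once, in parallel, so experts never see each other's outputs. Chain-of-Experts instead refines the token over several sequential passes, letting experts in later passes condition on what earlier experts produced. Below is a minimal PyTorch sketch of that general mechanism; the class name `ChainedMoE`, the hyperparameters, and the naive per-expert dispatch loop are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of sequential expert communication in an MoE layer.
# Hypothetical illustration only -- not the CoE repository's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChainedMoE(nn.Module):
    def __init__(self, dim=256, n_experts=8, top_k=2, n_iters=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(dim, n_experts)
        self.top_k = top_k
        self.n_iters = n_iters  # chain length: number of sequential expert passes

    def forward(self, x):  # x: (num_tokens, dim)
        for _ in range(self.n_iters):
            logits = self.router(x)                         # re-route at every pass
            weights, idx = logits.topk(self.top_k, dim=-1)  # (tokens, top_k)
            weights = F.softmax(weights, dim=-1)
            out = torch.zeros_like(x)
            for k in range(self.top_k):
                for e, expert in enumerate(self.experts):
                    mask = idx[:, k] == e                   # tokens routed to expert e in slot k
                    if mask.any():
                        out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
            x = x + out  # residual update: the next pass's experts see this refined state
        return x

layer = ChainedMoE()
tokens = torch.randn(10, 256)
print(layer(tokens).shape)  # torch.Size([10, 256])
```

The key difference from a vanilla MoE layer is the outer loop: routing decisions in pass 2 depend on the residual-updated representation from pass 1, which is one simple way experts can "communicate".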
Alternatives and similar repositories for CoE
Users interested in CoE are comparing it to the libraries listed below.
- ☆19, updated Mar 3, 2025
- KernelBench v2: Can LLMs Write GPU Kernels? - Benchmark with Torch -> Triton (and more!) problems (☆23, updated Jul 4, 2025)
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM (☆113, updated Dec 20, 2024)
- Measuring the Signal to Noise Ratio in Language Model Evaluation (☆29, updated Aug 19, 2025)
- Muon is Scalable for LLM Training (☆1,469, updated Aug 3, 2025)
- ☆135, updated Jun 6, 2025
- Code for Blog Post: Can Better Cold-Start Strategies Improve RL Training for LLMs? (☆20, updated Mar 9, 2025)
- Fully open reproduction of DeepSeek-R1 (☆11, updated Mar 24, 2025)
- [COLM 2025] "C3PO: Critical-Layer, Core-Expert, Collaborative Pathway Optimization for Test-Time Expert Re-Mixing" (☆20, updated Apr 9, 2025)
- Expert Specialized Fine-Tuning (☆734, updated May 22, 2025)
- Repository for the paper "Aligning LLMs to Ask Good Questions: A Case Study in Clinical Reasoning" (☆18, updated Feb 21, 2025)
- The loss landscape of Large Language Models resembles a basin! (☆37, updated Jul 8, 2025)
- Code for the ICLR 2025 paper "What is Wrong with Perplexity for Long-context Language Modeling?" (☆110, updated Oct 11, 2025)
- Kinetics: Rethinking Test-Time Scaling Laws (☆87, updated Jul 11, 2025)
- ☆32, updated Jul 8, 2024
- RAGEN leverages reinforcement learning to train LLM reasoning agents in interactive, stochastic environments (☆2,642, updated Apr 14, 2026)
- ☆228, updated Oct 27, 2025
- Inverse Scaling in Test-Time Compute (☆25, updated Dec 3, 2025)
- Codebase for the paper "The Remarkable Robustness of LLMs: Stages of Inference?" (☆19, updated Jun 11, 2025)
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" (☆71, updated Jul 30, 2024)
- An experiment to see if ChatGPT can improve the output of the Stanford Alpaca dataset (☆12, updated Mar 29, 2023)
- ☆114, updated Feb 25, 2025
- UFT: Unifying Supervised and Reinforcement Fine-Tuning (☆29, updated Jun 30, 2025)
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" (☆993, updated Feb 5, 2026)
- ☆70, updated Jul 8, 2025
- ☆20, updated Apr 17, 2023
- ☆130, updated Feb 4, 2026
- [ICLR 2025] COAT: Compressing Optimizer States and Activation for Memory-Efficient FP8 Training (☆261, updated Aug 9, 2025)
- Repository for "Contextualizing MLP-Mixers Spatiotemporally for Urban Data Forecast at Scale" (☆14, updated Apr 30, 2024)
- Official implementation for DenseMixer: Improving MoE Post-Training with Precise Router Gradient (☆67, updated Aug 3, 2025)
- Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication (☆21, updated Mar 21, 2024)
- ☆24, updated Sep 25, 2024
- Minimalistic large language model 3D-parallelism training (☆2,663, updated Apr 7, 2026)
- [ICLR 2025] Monet: Mixture of Monosemantic Experts for Transformers (☆78, updated Jun 23, 2025)
- [CIKM 2023 Oral] HoLe: Homophily-enhanced Structure Learning for Graph Clustering (☆11, updated Feb 29, 2024)
- The repository contains code for Adaptive Data Optimization (☆36, updated Dec 9, 2024)
- ☆15, updated Nov 3, 2022
- ☆29, updated Sep 10, 2025
- MoBA: Mixture of Block Attention for Long-Context LLMs (☆2,108, updated Apr 3, 2025)