LibMoE: A Library for Comprehensive Benchmarking Mixture of Experts in Large Language Models
☆46 · Updated Jan 10, 2026
Alternatives and similar repositories for LibMoE
Users who are interested in LibMoE are comparing it to the libraries listed below.
- [ICLR 2025] 🚀 CodeMMLU Evaluator: A framework for evaluating language models on the CodeMMLU MCQ benchmark. ☆29 · Updated Apr 21, 2025
- ☆10 · Updated Mar 14, 2021
- ☆67 · Updated Feb 8, 2022
- Official Release of NeurIPS 2024 paper "Slot State Space Models" ☆11 · Updated Mar 22, 2025
- Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model ☆13 · Updated Feb 11, 2025
- [NAACL 2024] Z-GMOT: Zero-shot Generic Multiple Object Tracking ☆13 · Updated May 3, 2024
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆19 · Updated Mar 10, 2025
- ☆15 · Updated Jan 24, 2025
- ☆11 · Updated Jul 25, 2021
- [ICRA 2024] Language-Conditioned Affordance-Pose Detection in 3D Point Clouds ☆51 · Updated Jan 10, 2025
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆61 · Updated Feb 7, 2025
- The code for "MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking" ☆19 · Updated Jan 25, 2025
- The code of "M4: Multi-Proxy Multi-Gate Mixture of Experts Network for Multiple Instance Learning in Histopathology Image Analysis" ☆14 · Updated Mar 31, 2025
- [CVPR 2023] Bridging the Gap between Model Explanations in Partially Annotated Multi-label Classification ☆21 · Updated Oct 12, 2023
- ☆12 · Updated Apr 17, 2023
- CRAI is a multimodal large language model based on the Mixture of Experts (MoE) architecture, supporting text and image cross-modal tasks… ☆16 · Updated Apr 29, 2025
- MoE-Visualizer is a tool designed to visualize the selection of experts in Mixture-of-Experts (MoE) models. ☆16 · Updated Apr 8, 2025
- Official code of the 1st-place solution for the NVIDIA AI City Challenge 2023, Track 2 ☆19 · Updated Jul 25, 2023
- Implementation for the paper "CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference" ☆35 · Updated Mar 6, 2025
- ☆15 · Updated Mar 20, 2025
- Scaling Laws for Mixture of Experts Models ☆15 · Updated Feb 25, 2025
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆30 · Updated Nov 12, 2024
- [ACL 2023 Findings] Emergent Modularity in Pre-trained Transformers ☆26 · Updated Jun 7, 2023
- ☆14 · Updated Sep 7, 2022
- Mamba R1 represents a novel architecture that combines the efficiency of Mamba's state space models with the scalability of Mixture of Ex… ☆25 · Updated Oct 13, 2025
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆109 · Updated Dec 20, 2024
- Implementation for MomentumSMoE ☆19 · Updated Apr 19, 2025
- Python implementation of the supervised graph prediction method proposed in http://arxiv.org/abs/2202.03813 using PyTorch library and POT… ☆16 · Updated Feb 25, 2022
- [KernelGYM & Dr. Kernel] A distributed GPU environment and a collection of RL training methods to support RL for kernel generation ☆148 · Updated Feb 6, 2026
- S-Chain: Structured Visual Chain-of-Thought For Medicine ☆46 · Updated Feb 10, 2026
- Code for "Language Models Can Learn from Verbal Feedback Without Scalar Rewards" ☆60 · Updated Jan 5, 2026
- [ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization ☆25 · Updated Oct 5, 2025
- A highly modular PyTorch framework with a focus on Neural Architecture Search (NAS). ☆24 · Updated Dec 3, 2021
- Testing LLMs' reflection and planning capabilities with gym environments ☆13 · Updated Aug 30, 2024
- Official code for "Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping" (ICLR 2025) ☆29 · Updated Oct 25, 2025
- Source code for the paper "Improving Deep Metric Learning by Divide and Conquer" ☆21 · Updated Dec 10, 2021
- Prototype of MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism ☆27 · Updated Apr 4, 2025
- Tree-Based Diffusion Schrödinger Bridge with Applications to Wasserstein Barycenters ☆10 · Updated Mar 5, 2024
- Sparse Backpropagation for Mixture-of-Expert Training ☆29 · Updated Jul 2, 2024