[ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization
☆24 · Updated Oct 5, 2025
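As context for the listing below, here is a minimal sketch of the idea the title names: each MoE expert starts as a copy of a pretrained dense FFN, and a random fraction of its weights is then re-initialized to diversify the experts. The helper name `drop_upcycle`, the `reinit_ratio` default, the init scheme, and the element-wise masking are all illustrative assumptions, not the repository's actual API or the paper's exact procedure.

```python
# Hedged sketch of Drop-Upcycling as described by the paper title:
# upcycle a dense FFN into MoE experts, re-initializing part of each.
# Not the repository's implementation; granularity of re-init is assumed.
import copy

import torch
import torch.nn as nn


def drop_upcycle(dense_ffn: nn.Module, num_experts: int,
                 reinit_ratio: float = 0.5) -> nn.ModuleList:
    """Build MoE experts from a pretrained dense FFN with partial re-init."""
    experts = []
    for _ in range(num_experts):
        expert = copy.deepcopy(dense_ffn)  # start from pretrained weights
        for param in expert.parameters():
            # Boolean mask selects the fraction of weights to re-initialize.
            mask = torch.rand_like(param) < reinit_ratio
            fresh = torch.empty_like(param)
            nn.init.normal_(fresh, mean=0.0, std=0.02)  # assumed init scheme
            param.data = torch.where(mask, fresh, param.data)
        experts.append(expert)
    return nn.ModuleList(experts)


# Example: upcycle a toy 2-layer FFN into 8 partially re-initialized experts.
ffn = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))
experts = drop_upcycle(ffn, num_experts=8, reinit_ratio=0.5)
```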
Alternatives and similar repositories for Drop-Upcycling
Users interested in Drop-Upcycling are comparing it to the repositories listed below.
- Solving Token Gradient Conflict in Mixture-of-Experts for Large Vision-Language Model ☆13 · Updated Feb 11, 2025
- CRAI is a multimodal large language model based on the Mixture of Experts (MoE) architecture, supporting text and image cross-modal tasks… ☆16 · Updated Apr 29, 2025
- MoE-Visualizer is a tool designed to visualize the selection of experts in Mixture-of-Experts (MoE) models. ☆16 · Updated Apr 8, 2025
- [ICML 2025 Oral] Mixture of Lookup Experts ☆72 · Updated Dec 3, 2025
- Scaling Laws for Mixture of Experts Models ☆15 · Updated Feb 25, 2025
- [ICML 2025] Code for "R2-T2: Re-Routing in Test-Time for Multimodal Mixture-of-Experts" ☆19 · Updated Mar 10, 2025
- Official implementation of "Mixture of Experts Meets Prompt-Based Continual Learning" (NeurIPS 2024) ☆45 · Updated Aug 1, 2025
- Mixture-of-Experts Multimodal Variational Autoencoder ☆15 · Updated Jul 3, 2025
- The code for "MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking" ☆19 · Updated Jan 25, 2025
- Official code for "Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping" (ICLR 2025) ☆28 · Updated Oct 25, 2025
- Prototype of MegaScale-Infer: Serving Mixture-of-Experts at Scale with Disaggregated Expert Parallelism ☆29 · Updated Apr 4, 2025
- ☆33 · Updated Feb 4, 2026
- Randomized algorithms class at CU ☆15 · Updated Jul 8, 2025
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Updated Feb 15, 2025
- Code for "Hierarchical Time-Aware Mixture of Experts for Multi-Modal Sequential Recommendation" (WWW 2025) ☆30 · Updated Jun 17, 2025
- Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral) ☆36 · Updated Jan 18, 2025
- PyTorch implementation of our ICML 2023 paper "Bi-directional Masks for Efficient N:M Sparse Training" ☆13 · Updated Jun 7, 2023
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆93 · Updated Dec 3, 2024
- [WWW 2025] Code for "Modality Interactive Mixture-of-Experts for Fake News Detection" ☆37 · Updated Jun 25, 2025
- Implementation of the ICLR 2025 paper "Mixture of Experts Made Personalized: Federated Prompt Learning for Vision-Language Models" ☆29 · Updated Apr 2, 2025
- [ACL 2026 Main] Analytical FFN-to-MoE Restructuring via Activation Pattern Analysis ☆38 · Updated this week
- ☆18 · Updated Aug 19, 2024
- ☆14 · Updated Jul 13, 2025
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆35 · Updated Jul 2, 2024
- ☆12 · Updated Jul 6, 2022
- ☆14 · Updated Feb 2, 2021
- (ICLR 2026) Unveiling Super Experts in Mixture-of-Experts Large Language Models ☆39 · Updated Sep 25, 2025
- Implementation of the MLSys 2023 paper "Cuttlefish: Low-rank Model Training without All The Tuning" ☆44 · Updated May 10, 2023
- Official PyTorch implementation of "The Linear Attention Resurrection in Vision Transformer" ☆16 · Updated Sep 7, 2024
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆31 · Updated Jun 7, 2024
- [ACL 2026 Findings] GRPO-CARE: Consistency-Aware Reinforcement Learning for Multimodal Reasoning ☆81 · Updated Jun 23, 2025
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM ☆113 · Updated Dec 20, 2024
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation ☆12 · Updated Jul 31, 2023
- Implementations of a Mixture-of-Experts (MoE) architecture designed for research on large language models (LLMs) and scalable neural netw… ☆68 · Updated Apr 8, 2025
- SynthRL: Scaling Visual Reasoning with Verifiable Data Synthesis ☆69 · Updated Jul 24, 2025
- Sketching-based matrix computations for NumPy arrays ☆17 · Updated Oct 29, 2019
- Quantization of LLMs and benchmarking ☆10 · Updated Apr 3, 2024
- A PyTorch-native platform for training generative AI models ☆17 · Updated Apr 21, 2026
- Some of my implementations of content from PPA ☆18 · Updated Nov 3, 2020