[ICLR 2025] Drop-Upcycling: Training Sparse Mixture of Experts with Partial Re-initialization
☆25 · Updated Oct 5, 2025
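For context, Drop-Upcycling builds a sparse Mixture-of-Experts model from a pretrained dense model by copying the dense FFN into each expert and re-initializing part of each copy's weights so the experts diversify during training. The sketch below illustrates this idea in PyTorch; it is a minimal, simplified variant under assumed names (`drop_upcycle`, `reinit_ratio`) that masks weight entries element-wise, not the paper's exact re-initialization scheme.

```python
# Minimal sketch of upcycling a dense FFN into MoE experts with partial
# re-initialization. The element-wise masking and all names here are
# illustrative assumptions, not the paper's exact recipe.
import copy

import torch
import torch.nn as nn

def drop_upcycle(ffn: nn.Module, num_experts: int, reinit_ratio: float = 0.5) -> nn.ModuleList:
    """Clone a dense FFN into `num_experts` experts, re-initializing a random
    fraction (`reinit_ratio`) of each clone's weight entries so the experts
    start diversified rather than identical."""
    experts = nn.ModuleList()
    for _ in range(num_experts):
        expert = copy.deepcopy(ffn)
        with torch.no_grad():
            for p in expert.parameters():
                if p.dim() < 2:  # leave biases and norm parameters untouched
                    continue
                fresh = torch.empty_like(p)
                nn.init.kaiming_uniform_(fresh)           # freshly sampled weights
                mask = torch.rand_like(p) < reinit_ratio  # entries to drop & re-init
                p.copy_(torch.where(mask, fresh, p))
        experts.append(expert)
    return experts

# Example: turn a pretrained two-layer FFN into 8 partially re-initialized experts.
dense_ffn = nn.Sequential(nn.Linear(512, 2048), nn.GELU(), nn.Linear(2048, 512))
experts = drop_upcycle(dense_ffn, num_experts=8, reinit_ratio=0.5)
```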
Alternatives and similar repositories for Drop-Upcycling
Users interested in Drop-Upcycling are comparing it to the libraries listed below.
- Official PyTorch implementation of CD-MOE ☆12 · Updated Mar 18, 2026
- MoE-Visualizer is a tool designed to visualize the selection of experts in Mixture-of-Experts (MoE) models. ☆16 · Updated Apr 8, 2025
- [ICML 2025 Oral] Mixture of Lookup Experts ☆72 · Updated Dec 3, 2025
- Scaling Laws for Mixture of Experts Models ☆15 · Updated Feb 25, 2025
- Mamba R1: a novel architecture that combines the efficiency of Mamba's state space models with the scalability of Mixture of Experts. ☆25 · Updated Oct 13, 2025
- Mixture-of-Experts Multimodal Variational Autoencoder ☆15 · Updated Jul 3, 2025
- Code for "MoPE: Mixture of Prefix Experts for Zero-Shot Dialogue State Tracking" ☆19 · Updated Jan 25, 2025
- Official code for "Efficient Residual Learning with Mixture-of-Experts for Universal Dexterous Grasping" (ICLR 2025) ☆29 · Updated Oct 25, 2025
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆61 · Updated Feb 7, 2025
- Code for "Hierarchical Time-Aware Mixture of Experts for Multi-Modal Sequential Recommendation" (WWW 2025) ☆28 · Updated Jun 17, 2025
- Randomized algorithms class at CU ☆15 · Updated Jul 8, 2025
- [ICLR 2025] Linear Combination of Saved Checkpoints Makes Consistency and Diffusion Models Better ☆16 · Updated Feb 15, 2025
- Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral) ☆36 · Updated Jan 18, 2025
- PyTorch implementation of our paper accepted at ICML 2023, "Bi-directional Masks for Efficient N:M Sparse Training" ☆13 · Updated Jun 7, 2023
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆93 · Updated Dec 3, 2024
- [WWW 2025] Code for "Modality Interactive Mixture-of-Experts for Fake News Detection" ☆35 · Updated Jun 25, 2025
- Implementation for the paper "CMoE: Fast Carving of Mixture-of-Experts for Efficient LLM Inference" ☆36 · Updated Mar 6, 2025
- [ICLR 2025] MoE++: Accelerating Mixture-of-Experts Methods with Zero-Computation Experts ☆264 · Updated Oct 16, 2024
- XFT: Unlocking the Power of Code Instruction Tuning by Simply Merging Upcycled Mixture-of-Experts ☆35 · Updated Jul 2, 2024
- Implementation of "the first large-scale multimodal mixture of experts models" from the paper "Multimodal Contrastive Learning with…" ☆36 · Updated Mar 22, 2026
- Implementation for the MLSys 2023 paper "Cuttlefish: Low-rank Model Training without All The Tuning" ☆45 · Updated May 10, 2023
- AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers ☆48 · Updated Oct 21, 2022
- Official PyTorch implementation of "The Linear Attention Resurrection in Vision Transformer" ☆16 · Updated Sep 7, 2024
- Repository for the Kaggle Shopee competition ☆11 · Updated Jun 7, 2021
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆31 · Updated Jun 7, 2024
- [ICLR 2025] Codebase for "ReMoE: Fully Differentiable Mixture-of-Experts with ReLU Routing", built on Megatron-LM. ☆112 · Updated Dec 20, 2024
- The official implementation of "FreqMoE: Enhancing Time Series Forecasting through Frequency Decomposition Mixture of Experts" ☆45 · Updated Mar 17, 2025
- [CVPR 2025] CL-MoE: Enhancing Multimodal Large Language Model with Dual Momentum Mixture-of-Experts for Continual Visual Question Answering ☆52 · Updated Jun 16, 2025
- [ICLR 2023] Eva: Practical Second-order Optimization with Kronecker-vectorized Approximation ☆12 · Updated Jul 31, 2023
- Reference implementation of models from Nyonic Model Factory ☆12 · Updated May 13, 2024
- Sketching-based matrix computations for numpy arrays ☆17 · Updated Oct 29, 2019
- Clustered Compositional Embeddings ☆12 · Updated Oct 25, 2023
- Beyond Pixels: Semi-Supervised Semantic Segmentation with a Multi-scale Patch-based Multi-Label Classifier (ECCV 2024) ☆10 · Updated May 6, 2025
- Residual vector quantization for KV cache compression in large language models ☆12 · Updated Oct 22, 2024
- Quantization of LLMs and benchmarking ☆10 · Updated Apr 3, 2024
- A PyTorch native platform for training generative AI models ☆16 · Updated Nov 18, 2025