State-of-the-art Parameter-Efficient MoE Fine-tuning Method
☆203 · Updated Aug 22, 2024
Alternatives and similar repositories for MixLoRA
Users interested in MixLoRA are comparing it to the repositories listed below.
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆136 · Updated Mar 11, 2025
- Accepted by IEEE Sensors Journal ☆33 · Updated Aug 30, 2020
- Code for 'Progressive cross-primitive consistency for open-world compositional zero-shot learning' ☆31 · Updated Mar 21, 2024
- Code for 'Three-stream Interaction Decoder Network for RGB-Thermal Salient Object Detection' ☆27 · Updated May 12, 2022
- Code for 'Non-Exemplar Online Class-incremental Continual Learning via Dual-prototype Self-augment and Refinement' ☆29 · Updated Mar 21, 2024
- ☆33 · Updated Jun 25, 2022
- ☆27 · Updated Oct 13, 2022
- ☆35 · Updated Dec 14, 2021
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆401 · Updated Apr 29, 2024
- Multimodal Instruction Tuning with Conditional Mixture of LoRA (ACL 2024) ☆31 · Updated Aug 9, 2024
- [SIGIR'24] The official implementation code of MOELoRA. ☆191 · Updated Jul 22, 2024
- An Efficient "Factory" to Build Multiple LoRA Adapters ☆374 · Updated Feb 13, 2025
- https://arxiv.org/abs/2408.02032 ☆134 · Updated Jan 16, 2025
- ☆177 · Updated Jul 22, 2024
- Official implementation of FouriScale (ECCV 2024) ☆159 · Updated Jul 27, 2024
- [EMNLP'24] Code and data for paper "Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models" ☆156 · Updated Jul 7, 2025
- X-LoRA: Mixture of LoRA Experts ☆267 · Updated Aug 4, 2024
- ☆66 · Updated Dec 2, 2024
- [EMNLP'24] MedAdapter: Efficient Test-Time Adaptation of Large Language Models Towards Medical Reasoning ☆36 · Updated Dec 26, 2024
- Adapt an LLM to a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting the LoRAs into the FFN (see the sketch after this list) ☆84 · Updated Oct 21, 2025
- [ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts ☆28 · Updated Oct 9, 2025
- [CVPR 2023] Diversity-Aware Meta Visual Prompting ☆84 · Updated Nov 30, 2023
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆668 · Updated Jul 22, 2024
- ☆17 · Updated May 2, 2024
- ☆274 · Updated Oct 31, 2023
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer ☆159 · Updated Feb 9, 2024
- Source code of the paper 'A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models' (ICML 2025) ☆37 · Updated Apr 2, 2025
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(ision) ☆335 · Updated Oct 14, 2025
- ☆124 · Updated Dec 9, 2024
- [SIGIR'24] The official implementation code of MOELoRA. ☆36 · Updated Aug 3, 2024
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆79 · Updated Nov 14, 2024
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) ☆101 · Updated Oct 28, 2024
- [ECCV 2024] FairDomain: Achieving Fairness in Cross-Domain Medical Image Segmentation and Classification ☆38 · Updated Jan 2, 2025
- ☆126 · Updated Jul 6, 2024
- ☆10 · Updated Apr 16, 2024
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) ☆103 · Updated Nov 21, 2024
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆81 · Updated Dec 27, 2025
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆145 · Updated Sep 20, 2024
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models ☆290 · Updated Sep 16, 2024
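
Several entries above (MixLoRA itself, the LoRA-in-FFN adapter noted earlier, X-LoRA, LoRAMoE) share one mechanism: a router that mixes the outputs of several LoRA adapters attached to a frozen FFN. Below is a minimal PyTorch sketch of that general pattern. All names (`LoRAExpert`, `MixtureOfLoRAFFN`) and hyperparameters (`rank`, `num_experts`, `top_k`) are illustrative assumptions, not the API of any repository listed here; real implementations typically inject adapters inside the FFN's individual projections and add a load-balancing loss on the router.

```python
# Minimal, illustrative sketch of a mixture-of-LoRA-experts FFN wrapper.
# NOT the MixLoRA implementation; just the shared pattern, under the
# assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAExpert(nn.Module):
    """One low-rank adapter: x -> B(A(x)) * (alpha / rank)."""

    def __init__(self, dim: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.A = nn.Linear(dim, rank, bias=False)
        self.B = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.B.weight)  # adapters start as a no-op
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.B(self.A(x)) * self.scale


class MixtureOfLoRAFFN(nn.Module):
    """Frozen FFN plus a token-level top-k routed mixture of LoRA experts."""

    def __init__(self, ffn: nn.Module, dim: int,
                 num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.ffn = ffn
        for p in self.ffn.parameters():
            p.requires_grad_(False)  # base weights frozen; only router + adapters train
        self.router = nn.Linear(dim, num_experts, bias=False)
        self.experts = nn.ModuleList(LoRAExpert(dim) for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim). Each token picks its top-k adapters.
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = self.ffn(x)
        for k in range(self.top_k):
            w = weights[..., k].unsqueeze(-1)             # (batch, seq, 1)
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e).unsqueeze(-1)   # tokens routed to e
                out = out + mask * w * expert(x)
        return out


# Usage: wrap each transformer block's FFN (a plain Linear stands in here),
# then fine-tune only the router and adapter parameters.
layer = MixtureOfLoRAFFN(nn.Linear(512, 512), dim=512)
y = layer(torch.randn(2, 16, 512))
print(y.shape)  # torch.Size([2, 16, 512])
```

Because the base FFN is frozen and each adapter's `B` matrix is zero-initialized, the wrapped layer initially behaves exactly like the original model; training updates only the router and adapter weights, which is what makes this family of methods parameter-efficient.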