State-of-the-art Parameter-Efficient MoE Fine-tuning Method
☆203 · Updated Aug 22, 2024
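For orientation, below is a minimal, hypothetical sketch of the mixture-of-LoRA-experts pattern that MixLoRA and several of the repositories listed here build on: a frozen base linear layer augmented with several low-rank adapters, with a learned router sending each token to its top-k adapters. The class names, routing scheme, and hyperparameters are illustrative assumptions, not MixLoRA's actual API.

```python
# Hypothetical sketch of a mixture-of-LoRA-experts layer (not MixLoRA's real code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAExpert(nn.Module):
    """One low-rank adapter producing a delta of (alpha/r) * B(A(x))."""
    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.A = nn.Linear(d_in, r, bias=False)   # down-projection
        self.B = nn.Linear(r, d_out, bias=False)  # up-projection
        nn.init.zeros_(self.B.weight)             # start as a no-op delta
        self.scale = alpha / r

    def forward(self, x):
        return self.B(self.A(x)) * self.scale

class MixtureOfLoRA(nn.Module):
    """Frozen base linear layer plus top-k routed LoRA experts."""
    def __init__(self, base: nn.Linear, n_experts: int = 4, top_k: int = 2, r: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # only adapters and router train
            p.requires_grad_(False)
        d_in, d_out = base.in_features, base.out_features
        self.experts = nn.ModuleList(
            LoRAExpert(d_in, d_out, r=r) for _ in range(n_experts))
        self.router = nn.Linear(d_in, n_experts, bias=False)
        self.top_k = top_k

    def forward(self, x):                              # x: (..., d_in)
        out = self.base(x)
        gates = F.softmax(self.router(x), dim=-1)      # (..., n_experts)
        topv, topi = gates.topk(self.top_k, dim=-1)    # per-token top-k experts
        topv = topv / topv.sum(dim=-1, keepdim=True)   # renormalize kept gates
        # Dense sketch: every expert runs on all tokens and is masked afterwards;
        # real implementations gather only the tokens routed to each expert.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (topi[..., slot] == e).unsqueeze(-1)
                w = topv[..., slot].unsqueeze(-1)
                out = out + mask * w * expert(x)
        return out

layer = MixtureOfLoRA(nn.Linear(768, 768), n_experts=4, top_k=2)
y = layer(torch.randn(2, 16, 768))                     # (batch, seq, d_out)
```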
Alternatives and similar repositories for MixLoRA
Users interested in MixLoRA are comparing it to the repositories listed below.
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT · ☆136 · Updated Mar 11, 2025
- Accepted by IEEE Sensors Journal · ☆33 · Updated Aug 30, 2020
- This repository has moved to https://github.com/TUDB-Labs/MoE-PEFT · ☆22 · Updated Aug 16, 2024
- Code for 'Non-Exemplar Online Class-incremental Continual Learning via Dual-prototype Self-augment and Refinement' · ☆30 · Updated Mar 21, 2024
- ☆33 · Updated Jun 25, 2022
- ☆27 · Updated Oct 13, 2022
- ☆35 · Updated Dec 14, 2021
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment · ☆403 · Updated Apr 29, 2024
- Multimodal Instruction Tuning with Conditional Mixture of LoRA (ACL 2024) · ☆32 · Updated Aug 9, 2024
- [SIGIR'24] The official implementation code of MOELoRA · ☆192 · Updated Jul 22, 2024
- An Efficient "Factory" to Build Multiple LoRA Adapters · ☆375 · Updated Feb 13, 2025
- ☆177 · Updated Jul 22, 2024
- [EMNLP'24] Code and data for the paper "Med-MoE: Mixture of Domain-Specific Experts for Lightweight Medical Vision-Language Models" · ☆157 · Updated Jul 7, 2025
- X-LoRA: Mixture of LoRA Experts · ☆270 · Updated Aug 4, 2024
- [EMNLP'24] MedAdapter: Efficient Test-Time Adaptation of Large Language Models Towards Medical Reasoning · ☆38 · Updated Dec 26, 2024
- ☆66 · Updated Dec 2, 2024
- Mixture of LoRA Experts · ☆10 · Updated Apr 7, 2024
- [ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts · ☆29 · Updated Oct 9, 2025
- [CVPR 2023] Diversity-Aware Meta Visual Prompting · ☆84 · Updated Nov 30, 2023
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition · ☆668 · Updated Jul 22, 2024
- ☆17 · Updated May 2, 2024
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning · ☆237 · Updated Dec 3, 2024
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answer · ☆159 · Updated Feb 9, 2024
- Source code for the paper "A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models" (ICML 2025) · ☆39 · Updated Apr 2, 2025
- Get/modify a variable's value in another running Linux process · ☆10 · Updated Mar 9, 2026
- ☆14 · Updated Jun 6, 2023
- [CVPR'24] HallusionBench: You See What You Think? Or You Think What You See? An Image-Context Reasoning Benchmark Challenging for GPT-4V(… · ☆336 · Updated Oct 14, 2025
- ☆128 · Updated Dec 9, 2024
- [SIGIR'24] The official implementation code of MOELoRA · ☆37 · Updated Aug 3, 2024
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] · ☆79 · Updated Nov 14, 2024
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR … · ☆294 · Updated Jun 7, 2023
- ☆125 · Updated Jul 6, 2024
- ☆10 · Updated Apr 16, 2024
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) · ☆103 · Updated Oct 28, 2024
- Papers about Hallucination in Multi-Modal Large Language Models (MLLMs) · ☆103 · Updated Nov 21, 2024
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models · ☆82 · Updated Dec 27, 2025
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) · ☆145 · Updated Sep 20, 2024
- SlowFast-LLaVA: A Strong Training-Free Baseline for Video Large Language Models · ☆291 · Updated Sep 16, 2024
- ☆26 · Updated Jan 20, 2025