LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment
☆403, updated Apr 29, 2024
Alternatives and similar repositories for LoRAMoE
Users that are interested in LoRAMoE are comparing it to the libraries listed below.
- ☆177, updated Jul 22, 2024
- [SIGIR'24] The official implementation code of MOELoRA. (☆192, updated Jul 22, 2024)
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method (☆203, updated Aug 22, 2024)
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning (☆235, updated Dec 3, 2024)
- Adapt an LLM to a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting the LoRAs into the FFN. (☆83, updated Oct 21, 2025)
- ☆275, updated Oct 31, 2023
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT (☆136, updated Mar 11, 2025)
- ☆125, updated Jul 6, 2024
- X-LoRA: Mixture of LoRA Experts (☆268, updated Aug 4, 2024)
- [ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts (☆29, updated Oct 9, 2025)
- ☆66, updated Dec 2, 2024
- Mixture of Lora Experts (☆10, updated Apr 7, 2024)
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) (☆1,000, updated Dec 6, 2024)
- AdaMoLE: Adaptive Mixture of LoRA Experts (☆38, updated Oct 11, 2024)
- This repository has been transferred to https://github.com/TUDB-Labs/MoE-PEFT (☆22, updated Aug 16, 2024)
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" (☆39, updated Jan 13, 2025)
- Multimodal Instruction Tuning with Conditional Mixture of LoRA (ACL 2024) (☆32, updated Aug 9, 2024)
- ☆30, updated Sep 28, 2023
- ☆198, updated Jul 13, 2024
- ☆26, updated Jan 20, 2025
- An Efficient "Factory" to Build Multiple LoRA Adapters (☆375, updated Feb 13, 2025)
- MoCLE (the first MLLM with MoE for instruction customization and generalization; https://arxiv.org/abs/2312.12379) (☆46, updated Jul 1, 2025)
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) (☆374, updated Jun 1, 2023)
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition (☆670, updated Jul 22, 2024)
- ☆29, updated May 24, 2024
- ☆18, updated Nov 10, 2024
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models (☆1,673, updated Mar 8, 2024)
- [ICLR 2025 Oral] "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" (☆91, updated Oct 15, 2024)
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning (☆20,895, updated Apr 2, 2026)
- ☆30, updated Nov 5, 2024
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" (☆91, updated Feb 27, 2024)
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation (☆955, updated Mar 24, 2026)
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO, DAPO, REINFORCE++, TIS, vLLM, async RL) (☆9,315, updated this week)
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) (☆145, updated Sep 20, 2024)
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors (☆84, updated Dec 21, 2024)
- A collection of AWESOME things about mixture-of-experts (☆1,273, updated Dec 8, 2024)
- ☆232, updated Jun 24, 2024
- [ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation (☆10, updated May 19, 2025)
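Many of the repositories above (LoRAMoE itself, X-LoRA, AdaMoLE, MOELoRA, the FFN-injection adapter) share one basic pattern: a frozen pretrained weight plus several low-rank LoRA adapters acting as experts, combined per token by a learned router. The sketch below illustrates that shared pattern only; all names, dimensions, and initializations are illustrative assumptions, not the implementation of any listed repository.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Illustrative sizes: input/output dims, number of LoRA experts, LoRA rank.
d_in, d_out, n_experts, rank = 16, 32, 4, 8

W_base = rng.standard_normal((d_in, d_out)) * 0.02  # frozen pretrained weight
A = rng.standard_normal((n_experts, d_in, rank)) * 0.01  # per-expert down-projection
B = np.zeros((n_experts, rank, d_out))  # zero-init: layer starts identical to base
W_router = rng.standard_normal((d_in, n_experts)) * 0.02  # token-level gating

def lora_moe_forward(x):
    """x: (batch, d_in) -> (batch, d_out) = frozen path + gated LoRA updates."""
    gates = softmax(x @ W_router)                     # (batch, n_experts)
    # Each expert's low-rank update: x @ A_e @ B_e, stacked over experts.
    delta = np.einsum('bi,eir,ero->beo', x, A, B)     # (batch, n_experts, d_out)
    return x @ W_base + np.einsum('be,beo->bo', gates, delta)

x = rng.standard_normal((4, d_in))
out = lora_moe_forward(x)
print(out.shape)  # (4, 32)
```

Because the B matrices are zero-initialized, the layer initially reproduces the frozen base projection exactly; training moves only A, B, and the router, which is what keeps these methods parameter-efficient.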