LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment
☆401 · Updated Apr 29, 2024
Alternatives and similar repositories for LoRAMoE
Users interested in LoRAMoE are comparing it to the repositories listed below.
- [SIGIR'24] The official implementation code of MOELoRA. ☆188 · Updated Jul 22, 2024
- ☆176 · Updated Jul 22, 2024
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆203 · Updated Aug 22, 2024
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆232 · Updated Dec 3, 2024
- ☆274 · Updated Oct 31, 2023
- ☆126 · Updated Jul 6, 2024
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,001 · Updated Dec 6, 2024
- ☆65 · Updated Dec 2, 2024
- [ICLR 2025] Making LLMs More Effective with Hierarchical Mixture of LoRA Experts ☆27 · Updated Oct 9, 2025
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆38 · Updated Oct 11, 2024
- This repository has been transferred to https://github.com/TUDB-Labs/MoE-PEFT ☆22 · Updated Aug 16, 2024
- ☆30 · Updated Sep 28, 2023
- ☆196 · Updated Jul 13, 2024
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆38 · Updated Jan 13, 2025
- ☆30 · Updated Nov 5, 2024
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆669 · Updated Jul 22, 2024
- Multimodal Instruction Tuning with Conditional Mixture of LoRA (ACL 2024) ☆31 · Updated Aug 9, 2024
- AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023) ☆369 · Updated Jun 1, 2023
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆45 · Updated Jul 1, 2025
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Updated Mar 4, 2024
- ☆18 · Updated Nov 10, 2024
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆83 · Updated Dec 21, 2024
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,037 · Updated Feb 21, 2026
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆409 · Updated Jun 30, 2025
- [ICML 2024] Junk DNA Hypothesis: A Task-Centric Angle of LLM Pre-trained Weights through Sparsity; Lu Yin*, Ajay Jaiswal*, Shiwei Liu, So… ☆16 · Updated Apr 21, 2025
- ☆29 · Updated May 24, 2024
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆172 · Updated Jan 29, 2026
- A collection of AWESOME things about mixture-of-experts ☆1,266 · Updated Dec 8, 2024
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,678 · Updated this week
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't…" ☆130 · Updated Jul 10, 2024
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆145 · Updated Sep 20, 2024
- [ICLR 2025 Oral] "Your Mixture-of-Experts LLM Is Secretly an Embedding Model For Free" ☆90 · Updated Oct 15, 2024
- ☆25 · Updated Jan 20, 2025
- [ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation ☆936 · Updated Oct 1, 2024
- [CSUR 2025] Continual Learning of Large Language Models: A Comprehensive Survey ☆522 · Updated Dec 23, 2025
- [ICLR 2024] Analyzing and Mitigating Object Hallucination in Large Vision-Language Models ☆155 · Updated Apr 30, 2024
- [ICLR 2025 Oral🔥] SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning ☆81 · Updated Jun 27, 2025
- [NeurIPS 2023 Datasets and Benchmarks Track] LAMM: Multi-Modal Large Language Models and Applications as AI Agents ☆318 · Updated Apr 16, 2024
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆159 · Updated Feb 9, 2024