liuqidong07 / MOELoRA-peft
[SIGIR'24] The official implementation code of MOELoRA.
☆188 · Jul 22, 2024 · Updated last year
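For quick context on what these repositories are alternatives to: MOELoRA fine-tunes a frozen base model through several low-rank (LoRA) experts mixed by a task-conditioned gate. Below is a minimal sketch of that idea; the class name, gate design, and dimensions are illustrative assumptions, not the repository's actual API.

```python
# Illustrative sketch of the MoE-LoRA idea: several low-rank (LoRA) experts
# share one frozen base weight, and a task-conditioned gate mixes their
# outputs. All names and sizes are assumptions for illustration only.
import torch
import torch.nn as nn


class MoELoRALinear(nn.Module):
    def __init__(self, in_dim, out_dim, num_experts=4, rank=8, num_tasks=8):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim, bias=False)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        # One (A, B) low-rank pair per expert; B starts at zero so the
        # adapted layer initially matches the base layer.
        self.A = nn.Parameter(torch.randn(num_experts, rank, in_dim) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, out_dim, rank))
        # Task-conditioned gate: task id -> softmax weights over experts.
        self.task_emb = nn.Embedding(num_tasks, num_experts)

    def forward(self, x, task_id):
        gate = torch.softmax(self.task_emb(task_id), dim=-1)  # (batch, E)
        # Per-expert low-rank update: x -> A_e x -> B_e (A_e x).
        low = torch.einsum("bd,erd->ber", x, self.A)    # (batch, E, rank)
        up = torch.einsum("ber,eor->beo", low, self.B)  # (batch, E, out)
        delta = torch.einsum("be,beo->bo", gate, up)    # mix the experts
        return self.base(x) + delta


# Usage: a batch with one task-0 example and one task-3 example.
layer = MoELoRALinear(16, 16)
out = layer(torch.randn(2, 16), torch.tensor([0, 3]))
print(out.shape)  # torch.Size([2, 16])
```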
Alternatives and similar repositories for MOELoRA-peft
Users interested in MOELoRA-peft are comparing it to the libraries listed below.
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆400 · Apr 29, 2024 · Updated last year
- ☆176 · Jul 22, 2024 · Updated last year
- ☆273 · Oct 31, 2023 · Updated 2 years ago
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆133 · Mar 11, 2025 · Updated 11 months ago
- [SIGIR'24] The official implementation code of MOELoRA. ☆36 · Aug 3, 2024 · Updated last year
- X-LoRA: Mixture of LoRA Experts ☆263 · Aug 4, 2024 · Updated last year
- ☆125 · Jul 6, 2024 · Updated last year
- An Efficient "Factory" to Build Multiple LoRA Adapters ☆370 · Feb 13, 2025 · Updated last year
- Token-level adaptation of LoRA matrices for downstream task generalization. ☆15 · Apr 14, 2024 · Updated last year
- This repository has moved to https://github.com/TUDB-Labs/MoE-PEFT ☆22 · Aug 16, 2024 · Updated last year
- MoCLE, the first MLLM with MoE for instruction customization and generalization (https://arxiv.org/abs/2312.12379) ☆45 · Jul 1, 2025 · Updated 7 months ago
- [NeurIPS'24 Spotlight] The official implementation code of LLM-ESR. ☆47 · Jun 27, 2024 · Updated last year
- Analyzing and Reducing Catastrophic Forgetting in Parameter-Efficient Tuning ☆36 · Nov 17, 2024 · Updated last year
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,003 · Dec 6, 2024 · Updated last year
- ☆64 · Dec 2, 2024 · Updated last year
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆668 · Jul 22, 2024 · Updated last year
- ☆16 · Nov 12, 2024 · Updated last year
- Butler is a tool project for automating service management and task scheduling. ☆15 · Updated this week
- ☆44 · Oct 1, 2024 · Updated last year
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Apr 28, 2024 · Updated last year
- ☆196 · Jul 13, 2024 · Updated last year
- Federated Learning - PyTorch ☆15 · Jun 27, 2021 · Updated 4 years ago
- Efficient and Effective Weight-Ensembling Mixture of Experts for Multi-Task Model Merging (arXiv, 2024) ☆16 · Oct 28, 2024 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆146 · Sep 20, 2024 · Updated last year
- QAQ: Quality Adaptive Quantization for LLM KV Cache ☆55 · Mar 27, 2024 · Updated last year
- ☆19 · Jun 21, 2025 · Updated 7 months ago
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆473 · Apr 21, 2024 · Updated last year
- Official repo for the NeurIPS'24 paper "WAGLE: Strategic Weight Attribution for Effective and Modular Unlearning in Large Language Models" ☆18 · Dec 16, 2024 · Updated last year
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆507 · Aug 26, 2024 · Updated last year
- Generated geosite.dat based on the Antifilter Community List ☆24 · Feb 8, 2026 · Updated last week
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆83 · Dec 21, 2024 · Updated last year
- ☆415 · Nov 2, 2023 · Updated 2 years ago
- Reinforcement learning (RL) is an effective method to find reasoning pathways in incomplete knowledge graphs (KGs). To overcome the chall… ☆23 · Oct 13, 2024 · Updated last year
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Jul 17, 2024 · Updated last year
- ☆59 · Aug 22, 2024 · Updated last year
- [NeurIPS'25] Official repo for "Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" ☆42 · Oct 3, 2025 · Updated 4 months ago
- ☆31 · Aug 9, 2024 · Updated last year
- [NAACL'24 Oral] LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models ☆39 · Jan 9, 2025 · Updated last year
- Code and data for QueryAgent (ACL 2024) ☆20 · Dec 19, 2024 · Updated last year