Applied-Machine-Learning-Lab / MOELoRA-peft
[SIGIR'24] The official implementation code of MOELoRA.
☆36 · Updated last year
Alternatives and similar repositories for MOELoRA-peft
Users interested in MOELoRA-peft are comparing it to the repositories listed below.
- [SIGIR'24] The official implementation code of MOELoRA. ☆188 · Updated last year
- ☆175 · Updated last year
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆395 · Updated last year
- ☆28 · Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆233 · Updated last year
- MoCLE (first MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆45 · Updated 7 months ago
- ☆185 · Updated 2 weeks ago
- Advanced interview notes for large language models (大模型进阶面经) ☆97 · Updated 9 months ago
- ☆196 · Updated last year
- Code for ACL 2024 "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆33 · Updated 11 months ago
- ☆57 · Updated 8 months ago
- Code for "Retaining Key Information under High Compression Rates: Query-Guided Compressor for LLMs" (ACL 2024) ☆18 · Updated last year
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆38 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆84 · Updated last year
- Awesome Low-Rank Adaptation ☆59 · Updated 6 months ago
- An implementation of SEAL: Safety-Enhanced Aligned LLM fine-tuning via bilevel data selection. ☆22 · Updated 11 months ago
- Awesome-Low-Rank-Adaptation ☆128 · Updated last year
- [ICLR 2025] Language Imbalance Driven Rewarding for Multilingual Self-improving ☆24 · Updated 5 months ago
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆69 · Updated 7 months ago
- ☆306 · Updated 7 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆203 · Updated last year
- The repo for a survey of Bias and Fairness in IR with LLMs. ☆59 · Updated 5 months ago
- Resources and code for the Qilin dataset. ☆63 · Updated 9 months ago
- ☆152 · Updated last year
- A block pruning framework for LLMs. ☆27 · Updated 8 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) ☆99 · Updated last year
- ☆125 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆145 · Updated last year
- ☆141 · Updated last month
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆201 · Updated 2 months ago