yushuiwx / Mixture-of-LoRA-Experts
☆30 · Updated 4 months ago
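Before the list of related projects, a minimal PyTorch sketch of the general mixture-of-LoRA-experts idea may help orient readers: a frozen linear layer whose output is augmented by several low-rank LoRA adapters, combined by a learned gate. This is an illustrative sketch only; `MoLELinear` and all parameter names are hypothetical and do not reflect this repository's actual API.

```python
import torch
import torch.nn as nn


class MoLELinear(nn.Module):
    """Frozen base linear layer plus a gated mixture of LoRA experts (illustrative only)."""

    def __init__(self, in_features, out_features, num_experts=4, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        for p in self.base.parameters():          # base weights stay frozen
            p.requires_grad_(False)
        # One low-rank (A, B) adapter pair per expert; B starts at zero as in standard LoRA.
        self.lora_A = nn.ModuleList([nn.Linear(in_features, rank, bias=False) for _ in range(num_experts)])
        self.lora_B = nn.ModuleList([nn.Linear(rank, out_features, bias=False) for _ in range(num_experts)])
        for B in self.lora_B:
            nn.init.zeros_(B.weight)
        self.gate = nn.Linear(in_features, num_experts)   # token-wise router over experts
        self.scaling = alpha / rank

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)     # (..., num_experts)
        expert_out = torch.stack(
            [B(A(x)) for A, B in zip(self.lora_A, self.lora_B)], dim=-1
        )                                                  # (..., out_features, num_experts)
        mixed = (expert_out * weights.unsqueeze(-2)).sum(dim=-1)
        return self.base(x) + self.scaling * mixed


x = torch.randn(2, 16, 64)        # (batch, seq_len, hidden)
layer = MoLELinear(64, 64)
print(layer(x).shape)             # torch.Size([2, 16, 64])
```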
Alternatives and similar repositories for Mixture-of-LoRA-Experts:
Users interested in Mixture-of-LoRA-Experts are comparing it to the repositories listed below.
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆133 · Updated last month
- MoCLE (First MLLM with MoE for instruction customization and generalization!) (https://arxiv.org/abs/2312.12379) ☆35 · Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆188 · Updated 4 months ago
- [SIGIR'24] The official implementation code of MOELoRA. ☆160 · Updated 9 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆156 · Updated 8 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆65 · Updated 2 months ago
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆27 · Updated 6 months ago
- [EMNLP 2024] mDPO: Conditional Preference Optimization for Multimodal Large Language Models ☆72 · Updated 5 months ago
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language …" ☆34 · Updated 3 months ago
- The official code repository for PRMBench.