TUDB-Labs / MoE-PEFT
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
☆63 · Updated last week
Alternatives and similar repositories for MoE-PEFT:
Users interested in MoE-PEFT are comparing it to the repositories listed below
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆124 · Updated 5 months ago
- ☆122 · Updated 6 months ago
- [SIGIR'24] The official implementation code of MOELoRA. ☆143 · Updated 6 months ago
- [ICLR 2024] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆33 · Updated last month
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆48 · Updated last month
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆107 · Updated 2 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆103 · Updated 10 months ago
- A Survey on the Honesty of Large Language Models ☆51 · Updated last month
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models". ☆39 · Updated 2 months ago
- A Closer Look into Mixture-of-Experts in Large Language Models ☆41 · Updated 5 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated last year
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆70 · Updated 7 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆150 · Updated last month
- [ICLR 2025] SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights ☆47 · Updated last week
- [ACL 2024] Code for "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆15 · Updated 8 months ago