TUDB-Labs / MoE-PEFT
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
☆40 · Updated last month
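MoE-PEFT ships its own launcher and configuration format, which this listing does not reproduce. As a conceptual reference only, the sketch below shows one common way a mixture-of-LoRA-experts layer can be built in plain PyTorch. Every name in it (`MoELoRALinear`, `num_experts`, `rank`, `top_k`, the router design) is a hypothetical illustration, not MoE-PEFT's API: a frozen pretrained linear layer is paired with several low-rank LoRA experts, and a small learned router mixes the top-k experts per token, so only the adapters and router are trained.

```python
# Minimal, hypothetical sketch of a mixture-of-LoRA-experts layer.
# Illustrative only; this is NOT the MoE-PEFT library API.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELoRALinear(nn.Module):
    """Frozen pretrained linear layer plus several LoRA experts picked by a learned router."""

    def __init__(self, base: nn.Linear, num_experts: int = 4, rank: int = 8, top_k: int = 2):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # PEFT: only the adapters and the router are trained
        self.top_k = top_k
        in_f, out_f = base.in_features, base.out_features
        # One low-rank (A, B) pair per expert; B starts at zero so each adapter is a no-op initially.
        self.lora_A = nn.Parameter(torch.randn(num_experts, rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, out_f, rank))
        self.router = nn.Linear(in_f, num_experts, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)                                    # frozen pretrained path
        gate = F.softmax(self.router(x), dim=-1)              # (..., num_experts)
        topk_w, topk_idx = gate.topk(self.top_k, dim=-1)      # route each token to its top-k experts
        topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)    # renormalize the kept gate weights
        for slot in range(self.top_k):
            idx = topk_idx[..., slot]                          # chosen expert index per token
            A = self.lora_A[idx]                               # (..., rank, in_features)
            B = self.lora_B[idx]                               # (..., out_features, rank)
            h = torch.einsum("...ri,...i->...r", A, x)         # low-rank down-projection
            out = out + topk_w[..., slot:slot + 1] * torch.einsum("...or,...r->...o", B, h)
        return out


# Example: wrap one projection of a toy model (illustrative only).
layer = MoELoRALinear(nn.Linear(768, 768), num_experts=4, rank=8, top_k=2)
y = layer(torch.randn(2, 16, 768))
```

Initializing `lora_B` to zero means the wrapped layer reproduces the pretrained output exactly at the start of training, the usual LoRA initialization choice.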
Related projects
Alternatives and complementary repositories for MoE-PEFT
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆90 · Updated 2 months ago
- Repo for the EMNLP'24 Paper "Dual-Space Knowledge Distillation for Large Language Models". ☆36 · Updated this week
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆44 · Updated last year
- ☆27 · Updated last year
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆14 · Updated 5 months ago
- [ICLR 2024] This is the repository for the paper titled "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning" ☆95 · Updated 7 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆34 · Updated 7 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆96 · Updated last week
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆21 · Updated 4 months ago
- ☆115 · Updated 3 months ago
- SuperCorrect: Supervising and Correcting Language Models with Error-Driven Insights ☆34 · Updated 3 weeks ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆141 · Updated 4 months ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆25 · Updated 5 months ago
- A Survey on the Honesty of Large Language Models ☆44 · Updated last month
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆30 · Updated last month
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆72 · Updated 8 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆67 · Updated 5 months ago
- Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆61 · Updated last week
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆69 · Updated 2 weeks ago
- ☆29 · Updated last year
- [SIGIR'24] The official implementation code of MOELoRA. ☆124 · Updated 3 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆32 · Updated 10 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆64 · Updated 5 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆30 · Updated 3 weeks ago
- Code for https://arxiv.org/abs/2401.17139 (NeurIPS 2024) ☆22 · Updated this week
- One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning ☆38 · Updated last year
- Codes for Merging Large Language Models ☆24 · Updated 3 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆113 · Updated last week
- LongRecipe: Recipe for Efficient Long Context Generalization in Large Language Models ☆66 · Updated 3 weeks ago