TUDB-Labs / MoE-PEFT
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
☆97 · Updated 2 months ago
Alternatives and similar repositories for MoE-PEFT
Users interested in MoE-PEFT are comparing it to the libraries listed below:
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆161 · Updated 8 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆141 · Updated 2 months ago
- ☆134 · Updated 9 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆89 · Updated 3 months ago
- ☆97 · Updated 2 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆46 · Updated last month
- ☆79 · Updated 3 weeks ago
- ☆101 · Updated 10 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆67 · Updated 3 months ago
- [SIGIR'24] The official implementation code of MOELoRA. ☆162 · Updated 9 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆80 · Updated last week
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆109 · Updated last year
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆69 · Updated last month
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆116 · Updated last month
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆46 · Updated 4 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆119 · Updated 6 months ago
- Chain of Thought (CoT) is so hot, and so long! We need shorter reasoning processes! ☆52 · Updated last month
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆72 · Updated 7 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆47 · Updated 2 months ago
- ☆99 · Updated last week
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- [ACL 2024] Code for "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆19 · Updated 2 months ago
- ☆194 · Updated 6 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆202 · Updated this week
- ☆106 · Updated last week
- ☆29 · Updated last year
- ☆24 · Updated 11 months ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆45 · Updated 7 months ago
- Code for "A Sober Look at Progress in Language Model Reasoning" paper☆45Updated this week
- This is the official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆38 · Updated 7 months ago