TUDB-Labs / MoE-PEFT
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
☆99 · Updated 2 months ago
Alternatives and similar repositories for MoE-PEFT
Users interested in MoE-PEFT are comparing it to the libraries listed below.
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆162 · Updated 9 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆145 · Updated 2 months ago
- ☆138 · Updated 10 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆97 · Updated 3 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆120 · Updated 7 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆56 · Updated this week
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆69 · Updated 3 months ago
- ☆105 · Updated 2 months ago
- ☆131 · Updated 3 weeks ago
- [SIGIR'24] The official implementation code of MOELoRA. ☆167 · Updated 10 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆96 · Updated last week
- ☆64 · Updated last month
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆84 · Updated last year
- [NeurIPS 2024] The official implementation of paper: Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs. ☆123 · Updated 2 months ago
- Chain of Thoughts (CoT) is so hot! so long! We need short reasoning process! ☆53 · Updated 2 months ago
- [ICLR 2025] SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆49 · Updated 3 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆70 · Updated 2 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆213 · Updated 3 weeks ago
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆166 · Updated last week
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆86 · Updated 6 months ago
- ☆83 · Updated last month
- ☆24 · Updated 2 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆110 · Updated last year
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆79 · Updated 3 months ago
- Code for ICLR 2025 Paper "What is Wrong with Perplexity for Long-context Language Modeling?" ☆81 · Updated 3 weeks ago
- ☆107 · Updated 2 weeks ago
- Repo for the EMNLP'24 Paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆52 · Updated 7 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆85 · Updated 7 months ago
- ☆198 · Updated 7 months ago
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆64 · Updated 3 months ago