TUDB-Labs / MoE-PEFT
An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT
☆102 · Updated 3 months ago
Alternatives and similar repositories for MoE-PEFT
Users interested in MoE-PEFT are comparing it to the libraries listed below.
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆163 · Updated 10 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆156 · Updated 3 months ago
- This repository contains a regularly updated paper list for LLMs-reasoning-in-latent-space. ☆120 · Updated this week
- ☆142 · Updated 11 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆73 · Updated 4 months ago
- Chain of Thoughts (CoT) is so hot! so long! We need short reasoning process! ☆54 · Updated 2 months ago
- Model merging is a highly efficient approach for long-to-short reasoning. ☆65 · Updated 3 weeks ago
- ☆109 · Updated 3 months ago
- [ICLR 2025] Code and Data Repo for Paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆64 · Updated 6 months ago
- ACL'2025: SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs. and preprint: SoftCoT++: Test-Time Scaling with Soft Chain-of… ☆28 · Updated 3 weeks ago
- [ICLR 2025 Workshop] "Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models" ☆25 · Updated last week
- A Sober Look at Language Model Reasoning ☆74 · Updated last week
- [NeurIPS'24] Official code for *🎯DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving* ☆108 · Updated 6 months ago
- Official repository for paper: O1-Pruner: Length-Harmonizing Fine-Tuning for O1-Like Reasoning Pruning ☆82 · Updated 4 months ago
- ☆65 · Updated 2 months ago
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆222 · Updated last month
- ☆119 · Updated last month
- [ACL'25 Oral] What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆65 · Updated this week
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆88 · Updated 8 months ago
- [SIGIR'24] The official implementation code of MOELoRA. ☆168 · Updated 11 months ago
- ☆116 · Updated 3 weeks ago
- Code for "CREAM: Consistency Regularized Self-Rewarding Language Models", ICLR 2025. ☆22 · Updated 4 months ago
- ☆139 · Updated last month
- ☆203 · Updated 4 months ago
- Paper list, tutorial, and nano code snippets for Diffusion Large Language Models. ☆75 · Updated this week
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆41 · Updated 11 months ago
- Inference Code for Paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆52 · Updated 10 months ago
- [ICML'25] Our study systematically investigates massive values in LLMs' attention mechanisms. First, we observe massive values are concen… ☆73 · Updated this week
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆122 · Updated 7 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆110 · Updated 4 months ago