TUDB-Labs / MixLoRA
State-of-the-art Parameter-Efficient MoE Fine-tuning Method
☆156 · Updated 8 months ago
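For readers comparing the projects below, the sketch that follows illustrates the core idea MixLoRA is built around: a frozen dense FFN shared across experts, where each expert contributes only a small LoRA delta and a top-k router mixes the selected experts per token. This is a minimal, illustrative PyTorch sketch, not the TUDB-Labs implementation or its API; the class names, rank/alpha defaults, SiLU activation, and routing details are all assumptions.

```python
# Conceptual sketch of a MixLoRA-style mixture-of-LoRA-experts layer (not the
# TUDB-Labs code): the base FFN is frozen and shared, each expert only adds a
# low-rank LoRA delta, and a top-k router picks experts per token.
# All module and parameter names here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LoRAExpert(nn.Module):
    """Low-rank delta (B @ A) applied on top of a frozen linear layer."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.lora_a = nn.Linear(in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # standard LoRA init: start as a no-op
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.lora_b(self.lora_a(x)) * self.scaling


class MixLoRAStyleFFN(nn.Module):
    """Frozen base FFN plus a softmax-weighted top-k router over LoRA experts."""

    def __init__(self, hidden: int, ffn: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.base_up = nn.Linear(hidden, ffn)
        self.base_down = nn.Linear(ffn, hidden)
        for p in (*self.base_up.parameters(), *self.base_down.parameters()):
            p.requires_grad = False  # only the router and LoRA experts are trained
        self.router = nn.Linear(hidden, num_experts, bias=False)
        self.up_experts = nn.ModuleList(LoRAExpert(hidden, ffn) for _ in range(num_experts))
        self.down_experts = nn.ModuleList(LoRAExpert(ffn, hidden) for _ in range(num_experts))
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, hidden); route each token to its top-k experts.
        logits = self.router(x)
        weights, idx = torch.topk(logits, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, (up, down) in enumerate(zip(self.up_experts, self.down_experts)):
                mask = (idx[..., slot] == e).unsqueeze(-1)  # tokens routed to expert e
                if not mask.any():
                    continue
                h = F.silu(self.base_up(x) + up(x))   # shared frozen FFN + expert delta
                y = self.base_down(h) + down(h)
                out = out + mask * weights[..., slot:slot + 1] * y
        return out


if __name__ == "__main__":
    layer = MixLoRAStyleFFN(hidden=64, ffn=256)
    print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```

The published MixLoRA paper additionally applies LoRA to the attention projections and trains the router with an auxiliary load-balancing loss; both are omitted from this sketch for brevity.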
Alternatives and similar repositories for MixLoRA:
Users interested in MixLoRA are comparing it to the repositories listed below.
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆92 · Updated last month
- ☆132 · Updated 9 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆133 · Updated last month
- [SIGIR'24] The official implementation of MOELoRA. ☆160 · Updated 9 months ago
- ☆90 · Updated 3 months ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆65 · Updated 2 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆87 · Updated 2 months ago
- ☆99 · Updated 9 months ago
- ☆93 · Updated last month
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆323 · Updated 11 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆118 · Updated 5 months ago
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆112 · Updated 2 weeks ago
- ☆144 · Updated 7 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆81 · Updated 10 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆188 · Updated 4 months ago
- ☆192 · Updated 6 months ago
- ☆172 · Updated 9 months ago
- ☆54 · Updated last week
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆81 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆56 · Updated last month
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs". ☆115 · Updated last month
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆139 · Updated 2 months ago
- ☆93 · Updated last month
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆191 · Updated last month
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆63 · Updated last month
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆46 · Updated 8 months ago
- [EMNLP 2024 Findings🔥] Official implementation of "LOOK-M: Look-Once Optimization in KV Cache for Efficient Multimodal Long-Context In… ☆92 · Updated 5 months ago
- [arXiv 2025] Efficient Reasoning Models: A Survey ☆107 · Updated this week
- A regularly updated paper list for LLMs-reasoning-in-latent-space. ☆72 · Updated this week
- [ICML 2024] Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibrati… ☆36 · Updated 9 months ago