TUDB-Labs / MixLoRA
State-of-the-art Parameter-Efficient MoE Fine-tuning Method
☆152 · Updated 7 months ago
Alternatives and similar repositories for MixLoRA:
Users interested in MixLoRA are comparing it to the repositories listed below.
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆85 · Updated 3 weeks ago
- ☆131 · Updated 8 months ago
- ☆99 · Updated 8 months ago
- TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆103 · Updated 3 weeks ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆181 · Updated 4 months ago
- [SIGIR'24] The official implementation code of MOELoRA. ☆155 · Updated 8 months ago
- ☆186 · Updated 5 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆82 · Updated last month
- ☆142 · Updated 6 months ago
- ☆170 · Updated 8 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆314 · Updated 11 months ago
- ☆171 · Updated last month
- L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning ☆162 · Updated 2 weeks ago
- Paper List of Inference/Test Time Scaling/Computing ☆131 · Updated last week
- A regularly updated paper list for LLMs-reasoning-in-latent-space. ☆67 · Updated last week
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆115 · Updated 5 months ago
- ☆85 · Updated 3 weeks ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆132 · Updated last month
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆56 · Updated last month
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024). ☆72 · Updated 5 months ago
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆108 · Updated 3 weeks ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆77 · Updated 9 months ago
- PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models (NeurIPS 2024 Spotlight) ☆337 · Updated 2 months ago
- ☆82 · Updated 3 months ago
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆41 · Updated 3 months ago
- [ICLR 2025] SuperCorrect: Advancing Small LLM Reasoning with Thought Template Distillation and Self-Correction ☆66 · Updated last week
- A Survey on Efficient Reasoning for LLMs ☆204 · Updated last week
- Paper list for Efficient Reasoning. ☆331 · Updated this week
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆53 · Updated last month
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs". ☆107 · Updated last week