maidacundo / MoE-LoRA
Adapt an LLM into a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting LoRA adapters into the FFN layers.
☆57 · Updated 11 months ago
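To make the approach concrete, here is a minimal, illustrative sketch of the idea (not the repository's actual code): a frozen FFN linear layer is wrapped with several LoRA adapters acting as experts, and a small learned router softmax-mixes their low-rank updates per token. The class name `MoELoRALinear` and all hyperparameters below are assumptions for illustration.

```python
# Illustrative sketch only (assumed names/shapes; not the repository's code):
# a frozen FFN linear layer augmented with several LoRA "experts" whose
# low-rank updates are mixed per token by a learned softmax router.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, num_experts: int = 4,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)            # pretrained weights stay frozen
        in_f, out_f = base.in_features, base.out_features
        self.scaling = alpha / rank
        # One low-rank (A, B) pair per expert; B starts at zero so the
        # adapted layer initially matches the base layer exactly.
        self.lora_A = nn.Parameter(torch.randn(num_experts, rank, in_f) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, out_f, rank))
        self.router = nn.Linear(in_f, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, in_features)
        gates = F.softmax(self.router(x), dim=-1)               # (b, s, E)
        down = torch.einsum("bsi,eri->bser", x, self.lora_A)    # (b, s, E, r)
        up = torch.einsum("bser,eor->bseo", down, self.lora_B)  # (b, s, E, out)
        delta = (gates.unsqueeze(-1) * up).sum(dim=2)           # expert mix
        return self.base(x) + self.scaling * delta
```

In practice one would replace each FFN projection of a transformer block (e.g., the up- and down-projections) with such a wrapper and train only the LoRA and router parameters.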
Alternatives and similar repositories for MoE-LoRA
Users interested in MoE-LoRA are comparing it to the libraries listed below.
- [SIGIR'24] The official implementation code of MOELoRA. ☆180 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆130 · Updated 10 months ago
- ☆152 · Updated last year
- ☆152 · Updated 3 months ago
- [ACL 2025] An official PyTorch implementation of the paper "Condor: Enhance LLM Alignment with Knowledge-Driven Data Synthesis and Refinement" ☆35 · Updated 3 months ago
- ☆83 · Updated last year
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆76 · Updated this week
- Official implementation for the ACL 2025 paper "CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis" ☆30 · Updated 4 months ago
- An implementation of the paper "Improve Mathematical Reasoning in Language Models by Automated Process Supervision" from google de… ☆39 · Updated 2 months ago
- ☆39 · Updated 2 months ago
- [EMNLP'25] Code for the paper "MT-R1-Zero: Advancing LLM-based Machine Translation via R1-Zero-like Reinforcement Learning" ☆57 · Updated 5 months ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆63 · Updated last year
- Scaling Preference Data Curation via Human-AI Synergy ☆107 · Updated 2 months ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆149 · Updated 8 months ago
- The demo, code and data of FollowRAG ☆74 · Updated 2 months ago
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆175 · Updated 2 months ago
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆130 · Updated 5 months ago
- ☆114 · Updated last year
- Self-Evolved Diverse Data Sampling for Efficient Instruction Tuning ☆85 · Updated last year
- ☆184 · Updated last year
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆375 · Updated last year
- The code and data of DPA-RAG, accepted at the WWW 2025 main conference. ☆62 · Updated 8 months ago
- Code for the ACL'24 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆32 · Updated 7 months ago
- A highly capable 2.4B lightweight LLM using only 1T pre-training data with all details. ☆214 · Updated last month
- LongQLoRA: Extend Context Length of LLMs Efficiently ☆166 · Updated last year
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆150 · Updated this week
- Code for "Math-LLaVA: Bootstrapping Mathematical Reasoning for Multimodal Large Language Models" ☆89 · Updated last year
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- Test-time preference optimization (ICML 2025). ☆165 · Updated 4 months ago
- ☆116 · Updated last year