maidacundo / MoE-LoRA
Adapt an LLM into a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting the LoRA adapters into the FFN layers.
☆76 · Updated 2 months ago
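For orientation, below is a minimal PyTorch sketch of the general MoE-LoRA idea the description refers to: a frozen FFN linear layer is wrapped with several LoRA experts whose low-rank updates are mixed by a learned softmax router. This is an illustrative sketch under my own assumptions, not the repository's actual code; the class and parameter names (`MoELoRALinear`, `num_experts`, `rank`) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELoRALinear(nn.Module):
    """Hypothetical sketch: frozen base linear + several LoRA experts + router.

    Each expert e contributes a low-rank update x @ A_e @ B_e; a softmax
    router over the input mixes the expert updates, which are added to the
    frozen base projection.
    """

    def __init__(self, base: nn.Linear, num_experts: int = 4,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only adapters and router are trained

        in_f, out_f = base.in_features, base.out_features
        self.scaling = alpha / rank
        # One (A, B) low-rank pair per expert; B starts at zero so the
        # wrapped layer initially behaves exactly like the frozen base.
        self.lora_A = nn.Parameter(torch.randn(num_experts, in_f, rank) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, rank, out_f))
        self.router = nn.Linear(in_f, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = F.softmax(self.router(x), dim=-1)                  # (..., E)
        # Per-expert low-rank update: x @ A_e @ B_e -> (..., E, out)
        delta = torch.einsum("...d,edr,ero->...eo", x, self.lora_A, self.lora_B)
        mixed = torch.einsum("...e,...eo->...o", gates, delta)     # (..., out)
        return self.base(x) + self.scaling * mixed
```

Injecting the experts into the FFN (rather than attention) would then amount to replacing the FFN's linear projections, e.g. the up/down projections of each transformer block, with this wrapper while leaving the rest of the model frozen.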
Alternatives and similar repositories for MoE-LoRA
Users interested in MoE-LoRA are comparing it to the repositories listed below
- [SIGIR'24] The official implementation code of MOELoRA. ☆186 · Updated last year
- ☆173 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆139 · Updated last year
- CPPO: Accelerating the Training of Group Relative Policy Optimization-Based Reasoning Models (NeurIPS 2025) ☆169 · Updated 2 months ago
- Scaling Preference Data Curation via Human-AI Synergy ☆135 · Updated 6 months ago
- ☆87 · Updated 2 years ago
- An implementation of the paper "Improve Mathematical Reasoning in Language Models by Automated Process Supervision" from google de… ☆44 · Updated 5 months ago
- ☆39 · Updated 5 months ago
- ☆192 · Updated last year
- ☆57 · Updated 5 months ago
- [ICML 2025] The official implementation of "C-3PO: Compact Plug-and-Play Proxy Optimization to Achieve Human-like Retrieval-Augmented Gene…" ☆40 · Updated 8 months ago
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆391 · Updated last year
- ☆124 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆184 · Updated 6 months ago
- ☆176 · Updated last month
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆67 · Updated last year
- ☆111 · Updated 6 months ago
- OpenRFT: Adapting Reasoning Foundation Model for Domain-specific Tasks with Reinforcement Fine-Tuning ☆154 · Updated last year
- [ICML'24] Can AI Assistants Know What They Don't Know? ☆85 · Updated last year
- Official code implementation for the ACL 2025 paper "CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis" ☆32 · Updated 7 months ago
- ☆216 · Updated last month
- 🚀 LLaMA-MoE v2: Exploring Sparsity of LLaMA from Perspective of Mixture-of-Experts with Post-Training ☆91 · Updated last year
- xVerify: Efficient Answer Verifier for Reasoning Model Evaluations ☆143 · Updated last month
- Pre-trained, Scalable, High-performance Reward Models via Policy Discriminative Learning. ☆163 · Updated 3 months ago
- ☆125 · Updated last year
- [ACL'25] We propose a novel fine-tuning method, Separate Memory and Reasoning, which combines prompt tuning with LoRA. ☆80 · Updated 2 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆65 · Updated last year
- ☆47 · Updated 10 months ago
- Test-time preference optimization (ICML 2025). ☆177 · Updated 7 months ago
- [ACL 2025] We introduce ScaleQuest, a scalable, novel and cost-effective data synthesis method to unleash the reasoning capability of LLM… ☆68 · Updated last year