liuqidong07 / MOELoRA-peft
[SIGIR'24] The official implementation code of MOELoRA.
☆183 · Updated last year
Alternatives and similar repositories for MOELoRA-peft
Users interested in MOELoRA-peft are comparing it to the repositories listed below.
- ☆161 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆131 · Updated 11 months ago
- ☆186 · Updated last year
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆380 · Updated last year
- An implementation of the paper "Improve Mathematical Reasoning in Language Models by Automated Process Supervision" from Google De… ☆41 · Updated 3 months ago
- ☆83 · Updated last year
- ☆28 · Updated last year
- The code and data of DPA-RAG, accepted by the WWW 2025 main conference. ☆63 · Updated 9 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆165 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models" ☆36 · Updated 9 months ago
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs ☆186 · Updated 3 months ago
- Model merging is a highly efficient approach for long-to-short reasoning; a minimal merging sketch follows this list. ☆89 · Updated last week
- [ICLR'25] DataGen: Unified Synthetic Dataset Generation via Large Language Models ☆64 · Updated 7 months ago
- Code for the ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆32 · Updated 8 months ago
- [ICLR 2025] Released code for the paper "Spurious Forgetting in Continual Learning of Language Models" ☆54 · Updated 5 months ago
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆125 · Updated 7 months ago
- ☆157 · Updated last week
- The repository for In-context Autoencoder ☆145 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆180 · Updated 3 months ago
- ☆47 · Updated 8 months ago
- ☆25 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆192 · Updated last year
- A method of ensemble learning for heterogeneous large language models. ☆62 · Updated last year
- [NeurIPS 2024] The official implementation of the paper "Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs" ☆130 · Updated 7 months ago
- Adapt an LLM into a Mixture-of-Experts model using parameter-efficient fine-tuning (LoRA), injecting the LoRA experts into the FFN layers; see the MoE-LoRA sketch after this list. ☆61 · Updated last year
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO ☆55 · Updated 5 months ago
- Inference code for the paper "Harder Tasks Need More Experts: Dynamic Routing in MoE Models" ☆63 · Updated last year
- Counting-Stars (★) ☆83 · Updated 4 months ago
- Official code implementation for the ACL 2025 paper "CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis" ☆31 · Updated 5 months ago
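
For the model-merging entry above, here is a minimal sketch of the general idea only: weight-space interpolation between two fine-tuned checkpoints that share one architecture. The checkpoint paths and the coefficient `alpha` are hypothetical placeholders, not that repository's API or method details.

```python
# Minimal sketch of weight-space model merging: linear interpolation
# between two same-architecture checkpoints. The file paths and the
# mixing coefficient `alpha` below are hypothetical placeholders.
import torch

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Interpolate two state dicts key by key: alpha*a + (1-alpha)*b."""
    merged = {}
    for key, t_a in sd_a.items():
        t_b = sd_b[key]
        if torch.is_floating_point(t_a):
            merged[key] = alpha * t_a + (1.0 - alpha) * t_b
        else:
            merged[key] = t_a  # integer buffers (e.g. counters): keep one copy
    return merged

# Hypothetical checkpoints: a long-CoT model and a short-answer model.
sd_long = torch.load("long_cot_model.pt", map_location="cpu")
sd_short = torch.load("short_cot_model.pt", map_location="cpu")
torch.save(merge_state_dicts(sd_long, sd_short, alpha=0.5), "merged_model.pt")
```

In this kind of merging, `alpha` trades off between the two parents' behaviors; a grid over a few values is the usual way to pick it.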
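For the MoE-LoRA entry, here is a minimal sketch, assuming a standard transformer FFN projection: several LoRA adapters act as "experts" on top of a frozen base linear layer and are combined through a learned softmax router. All shapes, names, and the dense (non-top-k) routing are illustrative assumptions, not the interface of any repository listed above.

```python
# Minimal sketch of a mixture-of-LoRA-experts linear layer: a frozen
# base projection plus per-expert low-rank updates, mixed by a learned
# softmax router. Names and shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELoRALinear(nn.Module):
    def __init__(self, d_in, d_out, num_experts=4, rank=8, alpha=16.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        for p in self.base.parameters():
            p.requires_grad_(False)  # base projection stays frozen
        self.router = nn.Linear(d_in, num_experts)
        # One low-rank pair (A_e, B_e) per expert; B starts at zero so the
        # layer initially behaves exactly like the frozen base projection.
        self.lora_a = nn.Parameter(torch.randn(num_experts, d_in, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(num_experts, rank, d_out))
        self.scale = alpha / rank

    def forward(self, x):  # x: (batch, seq, d_in)
        gates = F.softmax(self.router(x), dim=-1)              # (b, s, E)
        # Per-expert low-rank update x @ A_e @ B_e, then a gated mixture.
        delta = torch.einsum("bsd,edr,erk->bsek", x, self.lora_a, self.lora_b)
        update = (gates.unsqueeze(-1) * delta).sum(dim=2)      # (b, s, d_out)
        return self.base(x) + self.scale * update

layer = MoELoRALinear(d_in=64, d_out=64)
print(layer(torch.randn(2, 10, 64)).shape)  # torch.Size([2, 10, 64])
```

Real MoE-PEFT implementations typically route sparsely (top-k experts per token) rather than densely as in this sketch; dense mixing is used here only to keep the example short.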