Alternatives and similar repositories for MoLA
MoLA: ☆179 · Updated Jul 22, 2024
Users interested in MoLA are comparing it to the libraries listed below.
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆404 · Updated Apr 29, 2024
- [SIGIR'24] The official implementation code of MOELoRA. ☆192 · Updated Jul 22, 2024
- ☆277 · Updated Oct 31, 2023
- ☆19 · Updated Nov 10, 2024
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆204 · Updated Aug 22, 2024
- Adapt an LLM into a Mixture-of-Experts model with parameter-efficient fine-tuning (LoRA), injecting the LoRAs into the FFN. ☆84 · Updated Oct 21, 2025
- This repository has moved to https://github.com/TUDB-Labs/MoE-PEFT ☆22 · Updated Aug 16, 2024
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆139 · Updated Mar 11, 2025
- X-LoRA: Mixture of LoRA Experts ☆270 · Updated Aug 4, 2024
- Codebase for the ACL 2023 paper "Mixture-of-Domain-Adapters: Decoupling and Injecting Domain Knowledge to Pre-trained Language Models' Memori… ☆52 · Updated Oct 8, 2023
- ☆126 · Updated Jul 6, 2024
- Awesome-Low-Rank-Adaptation ☆128 · Updated Oct 13, 2024
- [AAAI 2024] MELO: Enhancing Model Editing with Neuron-indexed Dynamic LoRA ☆28 · Updated Apr 9, 2024
- ☆26 · Updated Jan 20, 2025
- ☆12 · Updated Jul 18, 2023
- ☆38 · Updated Jan 16, 2025
- ☆16 · Updated Nov 12, 2024
- Load multiple LoRA modules simultaneously and automatically switch to the appropriate combination of LoRA modules to generate the best answe… ☆159 · Updated Feb 9, 2024
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆512 · Updated Aug 26, 2024
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆146 · Updated Sep 20, 2024
- ☆233 · Updated Jun 24, 2024
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,001 · Updated Dec 6, 2024
- MoCLE (the first MLLM with MoE for instruction customization and generalization) (https://arxiv.org/abs/2312.12379) ☆46 · Updated Jul 1, 2025
- Code for the EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,233 · Updated Mar 10, 2024
- A collection of MoE (Mixture-of-Experts) papers, code, tools, etc. ☆12 · Updated Mar 15, 2024
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆170 · Updated Jun 13, 2024
- PrefixKV: Adaptive Prefix KV Cache is What Vision Instruction-Following Models Need for Efficient Generation [NeurIPS 2025] ☆18 · Updated Oct 11, 2025
- ☆129 · Updated Jan 22, 2024
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆86 · Updated Mar 5, 2024
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆84 · Updated Dec 21, 2024
- [ICLR'25] Code for KaSA, the official implementation of "KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models" ☆21 · Updated Jan 16, 2025
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆123 · Updated Apr 28, 2024
- [ACL 2024 Findings] Learning Fine-Grained Grounded Citations for Attributed Large Language Models ☆19 · Updated Oct 24, 2024
- Implementation of DoRA ☆310 · Updated Jun 7, 2024
- TRACE: A Comprehensive Benchmark for Continual Learning in Large Language Models ☆95 · Updated Jan 24, 2024
- Skywork-MoE: A Deep Dive into Training Techniques for Mixture-of-Experts Language Models ☆139 · Updated Jun 12, 2024
- ☆201 · Updated Jul 13, 2024
- [Findings of EMNLP 2024] AdaMoE: Token-Adaptive Routing with Null Experts for Mixture-of-Experts Language Models ☆20 · Updated Oct 2, 2024
- [ACL 2024 Findings] DMoERM: Recipes of Mixture-of-Experts for Effective Reward Modeling ☆17 · Updated Jun 6, 2024