Alternatives and similar repositories for parameter-efficient-moe
parameter-efficient-moe: ☆277 · Updated Oct 31, 2023
Users interested in parameter-efficient-moe are comparing it to the libraries listed below.
- [SIGIR'24] The official implementation code of MOELoRA. ☆192 · Updated Jul 22, 2024
- ☆179 · Updated Jul 22, 2024
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆404 · Updated Apr 29, 2024
- ⛷️ LLaMA-MoE: Building Mixture-of-Experts from LLaMA with Continual Pre-training (EMNLP 2024) ☆1,001 · Updated Dec 6, 2024
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆146 · Updated Sep 20, 2024
- Mixture-of-Experts (MoE) techniques for enhancing LLM performance through expert-driven prompt mapping and adapter combinations. ☆12 · Updated Feb 11, 2024
- A family of open-source Mixture-of-Experts (MoE) large language models ☆1,677 · Updated Mar 8, 2024
- ModuleFormer is a MoE-based architecture that includes two different types of experts: stick-breaking attention heads and feedforward experts. ☆226 · Updated Sep 18, 2025
- ☆415 · Updated Nov 2, 2023
- ☆30 · Updated Sep 28, 2023
- [ICLR'24 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆106 · Updated Jun 20, 2025
- A collection of MoE (Mixture-of-Experts) papers, code, tools, etc. ☆12 · Updated Mar 15, 2024
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆669 · Updated Jul 22, 2024
- A collection of awesome things about mixture-of-experts ☆1,275 · Updated Dec 8, 2024
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆204 · Updated Aug 22, 2024
- X-LoRA: Mixture of LoRA Experts; a minimal sketch of this routed-LoRA pattern follows the list. ☆270 · Updated Aug 4, 2024
- This PyTorch package implements MoEBERT: From BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆114 · Updated May 2, 2022
- Repository for Skill Set Optimization ☆14 · Updated Jul 26, 2024
- [TMLR 2024] Official implementation of "Sight Beyond Text: Multi-Modal Training Enhances LLMs in Truthfulness and Ethics" ☆20 · Updated Sep 15, 2023
- Official repository of NEFTune: Noisy Embeddings Improve Instruction Finetuning; the noise step is sketched after the list. ☆412 · Updated May 17, 2024
- Batched LoRAs ☆351 · Updated Sep 6, 2023
- Code and documentation for LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,692 · Updated Aug 14, 2024
- LongQLoRA: Extend the Context Length of LLMs Efficiently ☆168 · Updated Nov 12, 2023
- ☆126 · Updated Jul 6, 2024
- Serving multiple LoRA-finetuned LLMs as one ☆1,155 · Updated May 8, 2024
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,909 · Updated Jan 21, 2024
- DeepSeekMoE: Towards Ultimate Expert Specialization in Mixture-of-Experts Language Models ☆1,921 · Updated Jan 16, 2024
- LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method. ☆1,465 · Updated Nov 7, 2023
- Codebase for Merging Language Models (ICML 2024) ☆865 · Updated May 5, 2024
- Implementation of DoRA ☆310 · Updated Jun 7, 2024
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆239 · Updated Dec 3, 2024
- Official code for ReLoRA from the paper "Stack More Layers Differently: High-Rank Training Through Low-Rank Updates" ☆474 · Updated Apr 21, 2024
- Official PyTorch implementation of EMoE: Unlocking Emergent Modularity in Large Language Models (NAACL 2024 main conference) ☆39 · Updated May 28, 2024
- Code for the paper "QMoE: Practical Sub-1-Bit Compression of Trillion-Parameter Models" ☆280 · Updated Nov 3, 2023
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal, et al. ☆56 · Updated Feb 28, 2023
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024] ☆594 · Updated Dec 9, 2024
- ☆204 · Updated Dec 5, 2024
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆117 · Updated Jun 15, 2024
- Code for our EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,233 · Updated Mar 10, 2024
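
Several of the entries above (MOELoRA, LoRAMoE, X-LoRA, HydraLoRA) share one core pattern: a frozen base weight plus a handful of LoRA experts blended by a learned router. Below is a minimal PyTorch sketch of that pattern, assuming dense token-level softmax gating; the class name, shapes, and hyperparameters are illustrative and do not match any of the listed repos' actual APIs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAMoELinear(nn.Module):
    """A frozen linear layer plus a routed mixture of LoRA experts (sketch only)."""

    def __init__(self, in_features, out_features, num_experts=4, rank=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # base weights stay frozen
        self.base.bias.requires_grad_(False)
        self.scaling = alpha / rank
        # One low-rank (A, B) pair per expert; B starts at zero so the
        # adapted layer initially matches the frozen base layer.
        self.lora_A = nn.Parameter(torch.randn(num_experts, rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(num_experts, out_features, rank))
        self.router = nn.Linear(in_features, num_experts)  # token-level gating

    def forward(self, x):  # x: (batch, seq, in_features)
        gates = F.softmax(self.router(x), dim=-1)                 # (b, s, E)
        down = torch.einsum("bsi,eri->bser", x, self.lora_A)      # (b, s, E, r)
        up = torch.einsum("bser,eor->bseo", down, self.lora_B)    # (b, s, E, out)
        delta = torch.einsum("bse,bseo->bso", gates, up)          # gate-weighted sum
        return self.base(x) + self.scaling * delta

layer = LoRAMoELinear(768, 768)
y = layer(torch.randn(2, 16, 768))  # -> (2, 16, 768)
```

Real implementations differ in router granularity (token vs. task vs. layer), usually route top-k rather than densely, and often add an auxiliary load-balancing loss.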
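
The NEFTune entry is simple enough to sketch directly: during finetuning, uniform noise scaled by alpha / sqrt(L · d) is added to the embedding outputs, where L is the sequence length and d the embedding dimension. A minimal sketch, written as a standalone helper rather than the repo's actual trainer hook:

```python
import torch

def neftune_embed(embed_out: torch.Tensor, alpha: float = 5.0) -> torch.Tensor:
    """Add NEFTune-style uniform noise to embedding outputs during training.

    embed_out: (batch, seq_len, dim) output of the embedding layer.
    The noise magnitude alpha / sqrt(seq_len * dim) follows the NEFTune paper;
    wiring this into a specific trainer is omitted here.
    """
    seq_len, dim = embed_out.shape[1], embed_out.shape[2]
    mag = alpha / (seq_len * dim) ** 0.5
    noise = torch.empty_like(embed_out).uniform_(-mag, mag)
    return embed_out + noise
```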