☆126 · Jul 6, 2024 · Updated last year
Alternatives and similar repositories for MoSLoRA
Users that are interested in MoSLoRA are comparing it to the libraries listed below.
- ☆10 · Apr 16, 2024 · Updated 2 years ago
- ☆44 · Jul 22, 2024 · Updated last year
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment · ☆404 · Apr 29, 2024 · Updated 2 years ago
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning · ☆13 · Sep 2, 2024 · Updated last year
- Awesome Low-Rank Adaptation · ☆60 · Apr 20, 2026 · Updated 2 weeks ago
- [SIGIR'24] The official implementation of MOELoRA · ☆192 · Jul 22, 2024 · Updated last year
- [ICLR 2025] Official implementation of "Dynamic Low-Rank Sparse Adaptation for Large Language Models" · ☆24 · Mar 16, 2025 · Updated last year
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning · ☆238 · Dec 3, 2024 · Updated last year
- ☆36 · Aug 23, 2023 · Updated 2 years ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning · ☆182 · Jan 29, 2026 · Updated 3 months ago
- Code for ACL 2024 paper "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" · ☆35 · Feb 19, 2025 · Updated last year
- [ICLR'25] Official implementation of "KaSA: Knowledge-Aware Singular-Value Adaptation of Large Language Models" · ☆22 · Jan 16, 2025 · Updated last year
- Source code for "A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models" (ICML 2025) · ☆38 · Apr 2, 2025 · Updated last year
- [ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation · ☆10 · May 19, 2025 · Updated 11 months ago
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" · ☆148 · Apr 8, 2025 · Updated last year
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and fine-tune the quantized models · ☆15 · Jul 18, 2024 · Updated last year
- ☆19 · Jan 3, 2025 · Updated last year
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models · ☆86 · Mar 5, 2024 · Updated 2 years ago
- Awesome-Low-Rank-Adaptation · ☆128 · Oct 13, 2024 · Updated last year
- ☆179 · Jul 22, 2024 · Updated last year
- ☆221 · Nov 25, 2025 · Updated 5 months ago
- ☆153 · Sep 9, 2024 · Updated last year
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" · ☆41 · Oct 11, 2024 · Updated last year
- Code-Style In-Context Learning for Knowledge-Based Question Answering · ☆14 · Mar 3, 2024 · Updated 2 years ago
- Code and data for QueryAgent (ACL 2024) · ☆20 · Dec 19, 2024 · Updated last year
- ☆277 · Oct 31, 2023 · Updated 2 years ago
- Representation Surgery for Multi-Task Model Merging (ICML 2024) · ☆49 · Oct 10, 2024 · Updated last year
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT · ☆139 · Mar 11, 2025 · Updated last year
- [ICASSP 2025 Oral] The official implementation of "TextureDiffusion: Target Prompt Disentangled Editing for Various Texture Transfe…" · ☆16 · Mar 13, 2025 · Updated last year
- [EMNLP 2024] SURf: Teaching Large Vision-Language Models to Selectively Utilize Retrieved Information · ☆11 · Oct 11, 2024 · Updated last year
- X-LoRA: Mixture of LoRA Experts · ☆270 · Aug 4, 2024 · Updated last year
- [CVPR 2025] VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning · ☆13 · Jun 7, 2025 · Updated 11 months ago
- ICLR 2025 · ☆31 · May 21, 2025 · Updated 11 months ago
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation · ☆52 · Oct 20, 2025 · Updated 6 months ago
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning · ☆362 · Aug 7, 2024 · Updated last year
- ☆15 · Mar 20, 2025 · Updated last year
- ☆21 · Oct 13, 2024 · Updated last year
- ☆30 · Sep 28, 2023 · Updated 2 years ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method · ☆204 · Aug 22, 2024 · Updated last year