☆125 · Updated Jul 6, 2024
Alternatives and similar repositories for MoSLoRA
Users interested in MoSLoRA are comparing it to the repositories listed below.
- ☆44 · Updated Jul 22, 2024
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆403 · Updated Apr 29, 2024
- [ACL 2024 Findings] Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning ☆13 · Updated Sep 2, 2024
- Awesome Low-Rank Adaptation ☆59 · Updated Aug 6, 2025
- [SIGIR'24] The official implementation code of MOELoRA ☆192 · Updated Jul 22, 2024
- [ICLR 2025] Official implementation of the paper "Dynamic Low-Rank Sparse Adaptation for Large Language Models" ☆24 · Updated Mar 16, 2025
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning ☆181 · Updated Jan 29, 2026
- Code for ACL 2024 "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆35 · Updated Feb 19, 2025
- Source code for the paper "A Stronger Mixture of Low-Rank Experts for Fine-Tuning Foundation Models" (ICML 2025) ☆39 · Updated Apr 2, 2025
- [ICLR 2025] RaSA: Rank-Sharing Low-Rank Adaptation ☆10 · Updated May 19, 2025
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆148 · Updated Apr 8, 2025
- [EMNLP 2024] Quantize LLMs to extremely low bit-widths and fine-tune the quantized models ☆15 · Updated Jul 18, 2024
- ☆19 · Updated Jan 3, 2025
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆85 · Updated Mar 5, 2024
- Awesome-Low-Rank-Adaptation ☆127 · Updated Oct 13, 2024
- ☆177 · Updated Jul 22, 2024
- ☆221 · Updated Nov 25, 2025
- ☆153 · Updated Sep 9, 2024
- Official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆41 · Updated Oct 11, 2024
- ☆22 · Updated Nov 19, 2024
- Code and data for QueryAgent (ACL 2024) ☆20 · Updated Dec 19, 2024
- ☆18 · Updated Nov 10, 2024
- ☆276 · Updated Oct 31, 2023
- Representation Surgery for Multi-Task Model Merging (ICML 2024) ☆47 · Updated Oct 10, 2024
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆136 · Updated Mar 11, 2025
- [EMNLP 2024] SURf: Teaching Large Vision-Language Models to Selectively Utilize Retrieved Information ☆12 · Updated Oct 11, 2024
- Official implementation of Attentive Mask CLIP (ICCV 2023, https://arxiv.org/abs/2212.08653) ☆36 · Updated May 29, 2024
- X-LoRA: Mixture of LoRA Experts ☆270 · Updated Aug 4, 2024
- ICLR 2025 ☆31 · Updated May 21, 2025
- [CVPR 2025] VISCO: Benchmarking Fine-Grained Critique and Correction Towards Self-Improvement in Visual Reasoning ☆13 · Updated Jun 7, 2025
- MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning ☆361 · Updated Aug 7, 2024
- One Initialization to Rule them All: Fine-tuning via Explained Variance Adaptation ☆50 · Updated Oct 20, 2025
- ☆16 · Updated Feb 28, 2023
- ☆15 · Updated Mar 20, 2025
- ☆20 · Updated Oct 13, 2024
- ☆30 · Updated Sep 28, 2023
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆203 · Updated Aug 22, 2024
- ☆116 · Updated Jan 2, 2025
- Code for our EMNLP 2023 paper "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,231 · Updated Mar 10, 2024