X-LoRA: Mixture of LoRA Experts
☆267 · Aug 4, 2024 · Updated last year
Alternatives and similar repositories for xlora
Users interested in xlora are comparing it to the libraries listed below.
- ☆65 · Dec 2, 2024 · Updated last year
- LoRAMoE: Revolutionizing Mixture of Experts for Maintaining World Knowledge in Language Model Alignment ☆401 · Apr 29, 2024 · Updated last year
- [SIGIR'24] The official implementation code of MOELoRA. ☆188 · Jul 22, 2024 · Updated last year
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆203 · Aug 22, 2024 · Updated last year
- ☆176 · Jul 22, 2024 · Updated last year
- AdaMoLE: Adaptive Mixture of LoRA Experts ☆38 · Oct 11, 2024 · Updated last year
- Low-rank adaptation (LoRA) for Candle. ☆169 · Apr 18, 2025 · Updated 10 months ago
- An Efficient LLM Fine-Tuning Factory Optimized for MoE PEFT ☆133 · Mar 11, 2025 · Updated 11 months ago
- ☆274 · Oct 31, 2023 · Updated 2 years ago
- [COLM 2024] LoraHub: Efficient Cross-Task Generalization via Dynamic LoRA Composition ☆669 · Jul 22, 2024 · Updated last year
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆507 · Aug 26, 2024 · Updated last year
- ☆126 · Jul 6, 2024 · Updated last year
- Sampling techniques for Candle. ☆19 · Apr 3, 2024 · Updated last year
- Official repository of the paper "Can ChatGPT Detect DeepFakes? A Study of Using Multimodal Large Language Models for Media Forensics" ☆15 · Mar 22, 2024 · Updated last year
- The official PyTorch implementation of the paper "MLAE: Masked LoRA Experts for Visual Parameter-Efficient Fine-Tuning" ☆28 · Dec 3, 2024 · Updated last year
- A 7B-parameter model for mathematical reasoning ☆42 · Feb 17, 2025 · Updated last year
- [ACL 2024] LoRA-Flow: Dynamic LoRA Fusion for Large Language Models in Generative Tasks ☆23 · Oct 9, 2024 · Updated last year
- Fast, flexible LLM inference ☆6,623 · Updated this week
- Code for PHATGOOSE, introduced in "Learning to Route Among Specialized Experts for Zero-Shot Generalization" ☆91 · Feb 27, 2024 · Updated 2 years ago
- Load multiple LoRA modules simultaneously and automatically switch the appropriate combination of LoRA modules to generate the best answe… ☆159 · Feb 9, 2024 · Updated 2 years ago
- ☆320 · Sep 18, 2024 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR 2024. ☆99 · Oct 28, 2024 · Updated last year
- The implementation of FedCyBGD ☆11 · Jul 19, 2024 · Updated last year
- ☆11 · Jun 5, 2024 · Updated last year
- ☆11 · May 9, 2023 · Updated 2 years ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,899 · Jan 21, 2024 · Updated 2 years ago
- Official code for the paper "Examining Post-Training Quantization for Mixture-of-Experts: A Benchmark" ☆29 · Jun 30, 2025 · Updated 8 months ago
- Tools for merging pretrained large language models. ☆6,826 · Updated this week
- Make-An-Audio-3: Transforming Text/Video into Audio via Flow-based Large Diffusion Transformers ☆119 · May 19, 2025 · Updated 9 months ago
- PyTorch implementation of "HyperLLaVA: Dynamic Visual and Language Expert Tuning for Multimodal Large Language Models" ☆28 · Mar 22, 2024 · Updated last year
- Representation Surgery for Multi-Task Model Merging. ICML 2024. ☆47 · Oct 10, 2024 · Updated last year
- Official code for the paper "Reconstruct before Query: Continual Missing Modality Learning with Decomposed Prompt Collaborati… ☆12 · Aug 13, 2024 · Updated last year
- ☆13 · Jun 3, 2024 · Updated last year
- ☆11 · Sep 19, 2025 · Updated 5 months ago
- Code for the paper "Pretrained Models for Multilingual Federated Learning" at NAACL 2022 ☆11 · Aug 9, 2022 · Updated 3 years ago
- Code and datasets for the ACM MM 2024 paper "Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed … ☆11 · Sep 27, 2024 · Updated last year
- ☆14 · Apr 29, 2025 · Updated 10 months ago
- MTEB: Massive Text Embedding Benchmark ☆11 · Jan 29, 2024 · Updated 2 years ago
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆30 · Jun 7, 2024 · Updated last year