ZJU-LLMs / Awesome-LoRAs
☆53 · Updated 2 months ago
Related projects
Alternatives and complementary repositories for Awesome-LoRAs
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆67 · Updated this week
- Awesome-Low-Rank-Adaptation ☆33 · Updated 3 weeks ago
- A curated list of Model Merging methods. ☆82 · Updated last month
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆94 · Updated 5 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆99 · Updated last month
- Source code for the EMNLP 2023 main-conference paper "Sparse Low-rank Adaptation of Pre-trained Language Models". ☆69 · Updated 8 months ago
- Survey on Data-centric Large Language Models ☆63 · Updated 4 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆29 · Updated 2 weeks ago
- [ICML 2024] Official code for the paper "Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark". ☆73 · Updated 4 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆53 · Updated 3 months ago
- [Preprint] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆49 · Updated 2 months ago
- [NeurIPS 2024] Twin-Merging: Dynamic Integration of Modular Expertise in Model Merging ☆30 · Updated 3 weeks ago
- [ACL 2024] Code for "MELoRA: Mini-Ensemble Low-Rank Adapter for Parameter-Efficient Fine-Tuning" ☆14 · Updated 5 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference ☆28 · Updated 5 months ago
- Official code for the paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆69 · Updated 2 weeks ago
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion ☆78 · Updated 7 months ago
- [CVPR 2024] On the Diversity and Realism of Distilled Dataset: An Efficient Dataset Distillation Paradigm ☆52 · Updated 6 months ago
- CLIP-MoE: Mixture of Experts for CLIP ☆17 · Updated 3 weeks ago
- [NeurIPS 2023] Parameter-efficient Tuning of Large-scale Multimodal Foundation Model ☆82 · Updated 11 months ago
- [NeurIPS 2023 Spotlight] Large-scale Dataset Distillation/Condensation, 50 IPC (Images Per Class) achieves the highest 60.8% on original … ☆119 · Updated this week
- A Survey on Benchmarks of Multimodal Large Language Models ☆59 · Updated 3 weeks ago
- Instruction tuning in the continual learning paradigm ☆24 · Updated 4 months ago
- State-of-the-art parameter-efficient MoE fine-tuning method ☆89 · Updated 2 months ago