Yuheng2000 / Awesome-LoRA
Awesome Low-Rank Adaptation
☆39 · updated 9 months ago
Alternatives and similar repositories for Awesome-LoRA
Users interested in Awesome-LoRA are comparing it to the repositories listed below. Minimal sketches of the LoRA update and of weight-space model merging follow the list.
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) ☆82 · updated 7 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆59 · updated 3 months ago
- Awesome-Low-Rank-Adaptation ☆102 · updated 7 months ago
- Awesome-Efficient-Inference-for-LRMs: a collection of state-of-the-art, novel, token-efficient inference methods for Large Reasoning Models ☆64 · updated last week
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆31 · updated 6 months ago
- [NeurIPS 2024] "Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?" ☆35 · updated 4 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆203 · updated 6 months ago
- ☆25 · updated 9 months ago
- A curated list of model merging methods ☆92 · updated 8 months ago
- ☆131 · updated 3 weeks ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆76 · updated last year
- [ICLR 2025] Code and data for the paper "Super(ficial)-alignment: Strong Models May Deceive Weak Models in Weak-to-Strong Generalization" ☆13 · updated 11 months ago
- A block-pruning framework for LLMs ☆23 · updated 2 weeks ago
- Code for the paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆21 · updated 8 months ago
- What Happened in LLMs Layers when Trained for Fast vs. Slow Thinking: A Gradient Perspective ☆64 · updated 3 months ago
- [ICLR 2025] Dynamic Mixture of Experts: An Auto-Tuning Approach for Efficient Transformer Models ☆97 · updated 3 months ago
- Code for merging large language models ☆31 · updated 9 months ago
- ☆46 · updated 6 months ago
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge ☆68 · updated 3 months ago
- State-of-the-art Parameter-Efficient MoE Fine-tuning Method ☆162 · updated 9 months ago
- MoCLE, the first MLLM with MoE for instruction customization and generalization (https://arxiv.org/abs/2312.12379) ☆38 · updated last year
- Code for the ACL 2024 paper "SAPT: A Shared Attention Framework for Parameter-Efficient Continual Learning of Large Language Models" ☆35 · updated 4 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" (ICML 2024) ☆44 · updated 7 months ago
- Accepted LLM papers at NeurIPS 2024 ☆37 · updated 7 months ago
- An efficient LLM fine-tuning factory optimized for MoE PEFT ☆99 · updated 2 months ago
- ☆105 · updated 2 months ago
- Code for "Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities" (NeurIPS 2024) ☆22 · updated 5 months ago
- Representation Surgery for Multi-Task Model Merging (ICML 2024) ☆45 · updated 7 months ago
- ☆57 · updated this week
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆48 · updated last month
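
For readers new to the theme tying these repositories together, here is a minimal sketch of the LoRA update itself, assuming PyTorch; the layer size, rank, and alpha scaling below are illustrative choices, not taken from any listed repo.

```python
# Minimal LoRA sketch: a frozen linear layer plus a trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """y = W x + (alpha / r) * B A x, where only A and B are trained."""
    def __init__(self, in_features: int, out_features: int,
                 rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # freeze the pretrained weight
        self.base.bias.requires_grad_(False)
        # Low-rank factors: A projects down to `rank`, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))  # zero init: no change at start
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(768, 768, rank=8)
y = layer(torch.randn(2, 768))  # shape (2, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 2 * 8 * 768 = 12288, vs. 768 * 768 + 768 frozen
```

Only the two low-rank factors receive gradients, which is where the parameter savings of the PEFT methods listed above come from.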
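Several entries above (AdaMerging, EMR-Merging, the model-merging lists) build on weight-space merging. Below is a sketch of its simplest form, task-vector addition, assuming PyTorch state dicts; the `merge_task_vectors` name and the single `coeff` value are illustrative, not any listed method.

```python
# Simple task arithmetic: add the scaled sum of task vectors
# (finetuned - base) back onto the base weights.
from typing import Dict, List
import torch

def merge_task_vectors(
    base: Dict[str, torch.Tensor],
    finetuned: List[Dict[str, torch.Tensor]],
    coeff: float = 0.3,
) -> Dict[str, torch.Tensor]:
    merged = {}
    for name, w in base.items():
        task_sum = sum(ft[name] - w for ft in finetuned)  # sum of task vectors
        merged[name] = w + coeff * task_sum
    return merged

# Usage: merged_sd = merge_task_vectors(base_sd, [math_sd, code_sd], coeff=0.3)
```

A single global coefficient is the crudest choice; methods such as AdaMerging instead learn the merging coefficients adaptively, per task or per layer.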