MohammadrezaBanaei / LoRA-XS
LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters
☆35 · Updated 2 months ago
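For context, the core idea behind LoRA-XS is to replace LoRA's two trainable low-rank factors with a single tiny trainable r×r matrix R placed between frozen low-rank factors derived from a truncated SVD of the pretrained weight, so each adapted layer trains only r² parameters. Below is a minimal PyTorch sketch of that idea; the class name, exact SVD-factor placement, and zero initialization of R are assumptions for illustration, not the repository's actual code.

```python
import torch
import torch.nn as nn

class LoRAXSLinear(nn.Module):
    """Sketch of a LoRA-XS-style adapter (hedged reconstruction):
    effective weight is W + A @ R @ B, where A and B come from a
    truncated SVD of W and stay frozen; only the r x r matrix R
    is trained (r * r parameters per layer)."""

    def __init__(self, linear: nn.Linear, r: int = 8):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad = False  # base weights stay frozen

        W = linear.weight.data  # shape: (out_features, in_features)
        U, S, Vh = torch.linalg.svd(W, full_matrices=False)
        # Frozen low-rank factors from the top-r singular directions.
        # (The exact placement of the singular values is an assumption.)
        self.register_buffer("A", U[:, :r] * S[:r])  # (out_features, r)
        self.register_buffer("B", Vh[:r, :])         # (r, in_features)
        # Only R is trainable; zero init leaves the model unchanged at start.
        self.R = nn.Parameter(torch.zeros(r, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base output + x @ (A R B)^T = base + ((x B^T) R^T) A^T
        return self.linear(x) + ((x @ self.B.T) @ self.R.T) @ self.A.T
```

Compared to standard LoRA's d·r + r·k trainable parameters per layer, this design trains only r² values, which is where the "extremely small number of parameters" in the title comes from.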
Alternatives and similar repositories for LoRA-XS
Users interested in LoRA-XS are comparing it to the repositories listed below.
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆76 · Updated last year
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆44 · Updated 7 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆59 · Updated 3 months ago
- Code for the paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆21 · Updated 8 months ago
- ☆28 · Updated 3 months ago
- ☆54 · Updated 5 months ago
- A curated list of Model Merging methods. ☆92 · Updated 8 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆82 · Updated 7 months ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆45 · Updated 7 months ago
- ☆12 · Updated 4 months ago
- Awesome-Low-Rank-Adaptation ☆102 · Updated 7 months ago
- ☆67 · Updated 3 years ago
- Model Merging with SVD to Tie the KnOTS [ICLR 2025] ☆56 · Updated 2 months ago
- Code accompanying the paper "Massive Activations in Large Language Models" ☆162 · Updated last year
- Code for Merging Large Language Models ☆31 · Updated 9 months ago
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ☆14 · Updated 10 months ago
- ☆25 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆31 · Updated 7 months ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆117 · Updated last month
- Source code of (quasi-)Givens Orthogonal Fine-Tuning, integrated into the peft library ☆16 · Updated 2 months ago
- Official Pytorch Implementation of "OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning" b… ☆32 · Updated last year
- Source code of the EMNLP 2022 Findings paper "SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters" ☆18 · Updated last year
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆56 · Updated 5 months ago
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆85 · Updated 7 months ago
- Preprint: Asymmetry in Low-Rank Adapters of Foundation Models ☆35 · Updated last year
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models" ☆102 · Updated last year
- Code for "Surgical Fine-Tuning Improves Adaptation to Distribution Shifts", published at ICLR 2023 ☆28 · Updated last year
- Bayesian Low-Rank Adaptation for Large Language Models ☆34 · Updated 11 months ago
- Official Pytorch Implementation of Our Paper Accepted at ICLR 2024 -- Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM… ☆47 · Updated last year