ArthurLeoM / peft-givens
Source code of (quasi-)Givens Orthogonal Fine-Tuning integrated into the peft library
☆16 · Updated 2 months ago
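For context on the listed repository: (quasi-)Givens orthogonal fine-tuning parameterizes the update to a pretrained weight as a product of 2-D Givens rotations, so the weight is transformed by an (approximately) orthogonal matrix while only the rotation angles are trained. The sketch below illustrates that idea in PyTorch; the class name, the `pairs` argument, and the dense rotation-matrix construction are illustrative assumptions, not the peft-givens API.

```python
import torch
import torch.nn as nn

class GivensRotatedLinear(nn.Module):
    """Wraps a frozen nn.Linear and learns a product of Givens rotations
    applied to its weight, so the learned transform stays orthogonal."""
    def __init__(self, linear: nn.Linear, pairs):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad_(False)               # pretrained weights stay frozen
        self.pairs = pairs                        # (i, j) coordinate pairs to rotate
        self.thetas = nn.Parameter(torch.zeros(len(pairs)))  # one learnable angle per rotation

    def forward(self, x):
        d = self.linear.out_features
        R = torch.eye(d, device=x.device, dtype=x.dtype)
        for (i, j), theta in zip(self.pairs, self.thetas):
            G = torch.eye(d, device=x.device, dtype=x.dtype)
            c, s = torch.cos(theta), torch.sin(theta)
            G[i, i], G[j, j] = c, c
            G[i, j], G[j, i] = -s, s
            R = G @ R                             # composing rotations keeps R orthogonal
        w = R @ self.linear.weight                # rotate the frozen weight
        out = x @ w.T
        if self.linear.bias is not None:
            out = out + self.linear.bias
        return out

# Train only the rotation angles, e.g.:
layer = GivensRotatedLinear(nn.Linear(8, 8), pairs=[(0, 1), (2, 3), (4, 5)])
optimizer = torch.optim.Adam([layer.thetas], lr=1e-3)
```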
Alternatives and similar repositories for peft-givens
Users interested in peft-givens are comparing it to the repositories listed below.
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" ☆21 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging ☆59 · Updated 3 months ago
- Awesome-Low-Rank-Adaptation ☆102 · Updated 7 months ago
- [EMNLP 2023, Main Conference] Sparse Low-rank Adaptation of Pre-trained Language Models ☆77 · Updated last year
- AdaMerging: Adaptive Model Merging for Multi-Task Learning. ICLR, 2024. ☆83 · Updated 7 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] ☆44 · Updated 7 months ago
- Official code for our paper, "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆117 · Updated 2 months ago
- A curated list of Model Merging methods. ☆92 · Updated 8 months ago
- Representation Surgery for Multi-Task Model Merging. ICML, 2024. ☆45 · Updated 7 months ago
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) ☆46 · Updated 4 months ago
- Official Pytorch Implementation of "OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning" b… ☆32 · Updated last year
- Official code for ICLR 2024 paper, "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆81 · Updated last year
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆35 · Updated 3 months ago
- Data distillation benchmark ☆64 · Updated this week
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning ☆31 · Updated last year
- Task Singular Vectors: Reducing Task Interference in Model Merging. Merges models while avoiding task interference through separable models. ☆14 · Updated 2 weeks ago
- ICLR 2024, Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching ☆101 · Updated last year
- ☆105 · Updated 11 months ago
- ☆146 · Updated 8 months ago
- [NeurIPS 2024] "Mind the Gap between Prototypes and Images in Cross-domain Finetuning" ☆11 · Updated 6 months ago
- [NeurIPS'24 Oral] HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning ☆203 · Updated 6 months ago
- Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation (ICML'24 Oral) ☆14 · Updated 10 months ago
- A generalized framework for subspace tuning methods in parameter-efficient fine-tuning. ☆142 · Updated 4 months ago
- [NeurIPS 2024] AlphaPruning: Using Heavy-Tailed Self Regularization Theory for Improved Layer-wise Pruning of Large Language Models ☆23 · Updated 2 months ago
- Bayesian Low-Rank Adaptation for Large Language Models ☆34 · Updated 11 months ago
- ☆25 · Updated last year
- ☆54 · Updated 5 months ago
- The official implementation of "DAPE: Data-Adaptive Positional Encoding for Length Extrapolation" ☆38 · Updated 7 months ago
- [NeurIPS 2024] The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" ☆46 · Updated 5 months ago
- Implementation of "DiLM: Distilling Dataset into Language Model for Text-level Dataset Distillation" (accepted to NAACL 2024 Findings) ☆21 · Updated 3 months ago