ArthurLeoM / peft-givens
Source code of (quasi-)Givens Orthogonal Fine-Tuning, integrated into the peft library.
☆16 · Updated last month
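For context on what the repository implements: the idea behind Givens orthogonal fine-tuning is to keep the pretrained weight frozen and learn only an orthogonal transform composed of 2×2 Givens rotations, so fine-tuning preserves the norms and pairwise angles of the pretrained weight's rows while training very few parameters. The sketch below is a minimal illustration of that idea, not the repository's actual API; the class name `GivensLinear`, the `pairs` argument, and the dense rotation-matrix construction are all assumptions made for illustration.

```python
# Minimal sketch of Givens-rotation-based orthogonal fine-tuning.
# NOT the peft-givens API; a hypothetical illustration of the technique.
import torch
import torch.nn as nn


class GivensLinear(nn.Module):
    """Frozen linear weight rotated by a product of learnable Givens rotations."""

    def __init__(self, weight: torch.Tensor, pairs: list[tuple[int, int]]):
        super().__init__()
        self.register_buffer("weight", weight)                # frozen pretrained weight
        self.pairs = pairs                                    # coordinate planes to rotate
        self.angles = nn.Parameter(torch.zeros(len(pairs)))  # zero angles = identity

    def rotation(self) -> torch.Tensor:
        n = self.weight.shape[0]
        R = torch.eye(n, device=self.weight.device)
        for (i, j), theta in zip(self.pairs, self.angles):
            G = torch.eye(n, device=self.weight.device)
            c, s = torch.cos(theta), torch.sin(theta)
            G[i, i], G[j, j] = c, c        # 2x2 rotation block in the (i, j) plane
            G[i, j], G[j, i] = -s, s
            R = G @ R                      # a product of Givens rotations is orthogonal
        return R

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight is R @ W; orthogonal R leaves row norms and
        # pairwise angles of the pretrained weight unchanged.
        return x @ (self.rotation() @ self.weight).T


# Example: fine-tune only 4 rotation angles on top of a frozen 8x8 weight.
layer = GivensLinear(torch.randn(8, 8), pairs=[(0, 1), (2, 3), (4, 5), (6, 7)])
out = layer(torch.randn(2, 8))             # trainable parameters: layer.angles only
```

The dense identity-matrix construction here is for readability; a practical implementation would apply each rotation sparsely to the two affected rows, since a Givens rotation touches only one coordinate plane at a time.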
Alternatives and similar repositories for peft-givens:
Users interested in peft-givens are comparing it to the repositories listed below.
- Code for paper "Parameter Efficient Multi-task Model Fusion with Partial Linearization" · ☆21 · Updated 7 months ago
- Official Pytorch Implementation of "OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning" b…☆31Updated 10 months ago
- Awesome-Low-Rank-Adaptation · ☆93 · Updated 6 months ago
- Official repository of "Localizing Task Information for Improved Model Merging and Compression" [ICML 2024] · ☆43 · Updated 6 months ago
- [NeurIPS 2024 Spotlight] EMR-Merging: Tuning-Free High-Performance Model Merging · ☆57 · Updated last month
- ☆49 · Updated 4 months ago
- Official PyTorch implementation of "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM…" (ICLR 2024) · ☆47 · Updated last year
- The official implementation of "Dynamic Tuning Towards Parameter and Inference Efficiency for ViT Adaptation" (NeurIPS 2024) · ☆44 · Updated 3 months ago
- [ICLR 2024] Towards Lossless Dataset Distillation via Difficulty-Aligned Trajectory Matching · ☆102 · Updated 11 months ago
- [CVPR 2024 Highlight] Generalized Large-Scale Data Condensation via Various Backbone and Statistical Matching (G-VBSM) · ☆28 · Updated 6 months ago
- AdaMerging: Adaptive Model Merging for Multi-Task Learning (ICLR 2024) · ☆77 · Updated 5 months ago
- CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024) · ☆46 · Updated 3 months ago
- Data distillation benchmark · ☆58 · Updated last week
- LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning · ☆31 · Updated last year
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" · ☆112 · Updated 2 weeks ago
- [EMNLP 2023 Main] Sparse Low-rank Adaptation of Pre-trained Language Models · ☆75 · Updated last year
- ☆10 · Updated 2 months ago
- The code of the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) · ☆40 · Updated 2 years ago
- ☆29 · Updated 2 years ago
- ☆56 · Updated 3 months ago
- Elucidated Dataset Condensation (NeurIPS 2024) · ☆21 · Updated 6 months ago
- ☆16 · Updated 5 months ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation" (ICML 2023) · ☆32 · Updated last year
- Official repo for "Debiasing Large Visual Language Models", including a post-hoc debiasing method and a Visual Debias Decoding strat… · ☆78 · Updated 2 months ago
- [ICML'24 Oral] APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference · ☆39 · Updated 10 months ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) · ☆30 · Updated 5 months ago
- Official implementation of the paper "Knowledge Diffusion for Distillation" (NeurIPS 2023) · ☆82 · Updated last year
- Code for Merging Large Language Models · ☆29 · Updated 8 months ago
- ☆27 · Updated last year
- Representation Surgery for Multi-Task Model Merging (ICML 2024) · ☆44 · Updated 6 months ago