roymiles / VeLoRA
[NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections
☆20 · Updated 7 months ago
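The technique named in the repository's title can be sketched as follows. This is an illustrative reconstruction from the title alone, not the official VeLoRA implementation: the sub-token size `d_sub`, the fixed projection vector `v`, and the helper names are all assumptions.

```python
import numpy as np

# Illustrative sketch (assumed from the title, not the official code):
# activations kept for the backward pass are split into fixed-size
# sub-tokens, and each sub-token is compressed to a single coefficient
# by projecting onto one shared direction v (a rank-1 representation).

def compress(sub_tokens, v):
    # One scalar per sub-token: c_i = <x_i, v>
    return sub_tokens @ v

def reconstruct(coeffs, v):
    # Rank-1 approximation: x_hat_i = c_i * v
    return np.outer(coeffs, v)

rng = np.random.default_rng(0)
d_sub = 4                                      # sub-token size (assumed)
tokens = rng.standard_normal((8, 2 * d_sub))   # 8 tokens of dimension 8
sub = tokens.reshape(-1, d_sub)                # 16 sub-tokens of size 4

v = np.ones(d_sub) / np.sqrt(d_sub)            # fixed unit projection vector
coeffs = compress(sub, v)                      # stored: 16 floats
approx = reconstruct(coeffs, v)                # rank-1 surrogate activations

# Memory shrinks from sub.size floats to coeffs.size floats (factor d_sub).
assert sub.size == coeffs.size * d_sub
```

Storing one coefficient per sub-token instead of the full sub-token trades some gradient fidelity for a d_sub-fold reduction in activation memory, which is the trade-off the paper's title advertises.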
Alternatives and similar repositories for VeLoRA
Users interested in VeLoRA are comparing it to the repositories listed below.
- Towards Meta-Pruning via Optimal Transport, ICLR 2024 (Spotlight) ☆16 · Updated 6 months ago
- Model Merging with SVD to Tie the KnOTS [ICLR 2025] ☆56 · Updated 2 months ago
- ☆12 · Updated 4 months ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆56 · Updated 2 years ago
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆55 · Updated 9 months ago
- [NeurIPS'24] Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization ☆32 · Updated 8 months ago
- ☆20 · Updated last month
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023 ☆33 · Updated last year
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆56 · Updated 5 months ago
- ☆28 · Updated 3 months ago
- ☆21 · Updated 2 years ago
- Implementation of the paper "Training-Free Pretrained Model Merging" (CVPR 2024) ☆28 · Updated last year
- [ICML'24] Official implementation of "ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections" ☆14 · Updated last year
- Official implementation of "Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization" ☆77 · Updated last year
- ☆42 · Updated 6 months ago
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆40 · Updated 2 years ago
- ☆13 · Updated 7 months ago
- Official PyTorch implementation of the ICLR 2024 paper "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLM…" ☆47 · Updated last year
- ☆54 · Updated 5 months ago
- Elucidated Dataset Condensation (NeurIPS 2024) ☆22 · Updated 8 months ago
- BESA, a differentiable weight-pruning technique for large language models ☆16 · Updated last year
- Implementation and dataset for the paper "Can MLLMs Perform Text-to-Image In-Context Learning?" ☆38 · Updated 2 months ago
- Code for "Are "Hierarchical" Visual Representations Hierarchical?", NeurIPS Workshop on Symmetry and Geometry in Neural Representation… ☆20 · Updated last year
- ☆38 · Updated last year
- PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated last year
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation" ☆36 · Updated 10 months ago
- ☆24 · Updated 2 months ago
- Switch EMA: A Free Lunch for Better Flatness and Sharpness ☆26 · Updated last year
- Code release for "Understanding Bias in Large-Scale Visual Datasets" ☆20 · Updated 6 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆35 · Updated 2 months ago