roymiles / VeLoRA
[NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections
☆21 · Updated 9 months ago
Alternatives and similar repositories for VeLoRA
Users interested in VeLoRA are comparing it to the libraries listed below.
- Model Merging with SVD to Tie the KnOTS [ICLR 2025] ☆59 · Updated 3 months ago
- [NeurIPS'24] Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization ☆33 · Updated 9 months ago
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆56 · Updated 10 months ago
- This repository is the implementation of the paper "Training-Free Pretrained Model Merging" (CVPR 2024) ☆30 · Updated last year
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆57 · Updated 7 months ago
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆56 · Updated 2 years ago
- Code for T-MARS data filtering ☆35 · Updated last year
- PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- ☆14 · Updated 5 months ago
- ☆38 · Updated last year
- ☆30 · Updated 5 months ago
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆32 · Updated 8 months ago
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆15 · Updated this week
- ☆22 · Updated last month
- Code for ECCV 2022 paper "Learning with Recoverable Forgetting" ☆21 · Updated 2 years ago
- Official code for NeurIPS 2022 paper "How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders" ☆67 · Updated last year
- Code release for "Understanding Bias in Large-Scale Visual Datasets" ☆20 · Updated 7 months ago
- ☆42 · Updated last year
- Switch EMA: A Free Lunch for Better Flatness and Sharpness ☆26 · Updated last year
- ☆51 · Updated last year
- Data distillation benchmark ☆66 · Updated last month
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆25 · Updated 6 months ago
- Code for the paper "Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation" (CVPR 2023) ☆39 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- Towards Meta-Pruning via Optimal Transport, ICLR 2024 (Spotlight) ☆16 · Updated 7 months ago
- LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters ☆35 · Updated 4 months ago
- Official code for our paper "LoRA-Pro: Are Low-Rank Adapters Properly Optimized?" ☆126 · Updated 3 months ago
- Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers" ☆104 · Updated last year
- [ICLR 2025] When Attention Sink Emerges in Language Models: An Empirical View (Spotlight) ☆99 · Updated last week