roymiles / VeLoRA
[NeurIPS 2024] VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections
☆21 · Updated last year
Alternatives and similar repositories for VeLoRA
Users interested in VeLoRA are comparing it to the repositories listed below
- Model Merging with SVD to Tie the KnOTS [ICLR 2025] ☆80 · Updated 8 months ago
- [NeurIPS'24] Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization ☆37 · Updated last year
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆57 · Updated last year
- This repository is the implementation of the paper "Training-Free Pretrained Model Merging" (CVPR 2024) ☆32 · Updated last year
- SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (NeurIPS 2024) ☆38 · Updated last year
- PyTorch implementation of the paper "ViP: A Differentially Private Foundation Model for Computer Vision" ☆36 · Updated 2 years ago
- Code for T-MARS data filtering ☆35 · Updated 2 years ago
- Code and benchmark for the paper "A Practitioner's Guide to Continual Multimodal Pretraining" [NeurIPS'24] ☆60 · Updated last year
- Towards Meta-Pruning via Optimal Transport, ICLR 2024 (Spotlight) ☆16 · Updated last year
- Metrics for "Beyond neural scaling laws: beating power law scaling via data pruning" (NeurIPS 2022 Outstanding Paper Award) ☆57 · Updated 2 years ago
- ☆21 · Updated 2 years ago
- ☆34 · Updated 10 months ago
- Switch EMA: A Free Lunch for Better Flatness and Sharpness ☆28 · Updated last year
- Implementation for "Orthogonal Over-Parameterized Training" in CVPR'21 ☆22 · Updated 4 years ago
- [NeurIPS'24] Official PyTorch implementation for the paper "Knowledge Composition using Task Vectors with Learned Anisotropic Scaling" ☆26 · Updated 9 months ago
- ☆38 · Updated last year
- AdaSplash: Adaptive Sparse Flash Attention (aka Flash Entmax Attention) ☆30 · Updated 2 months ago
- Official code repo for the paper "Merging Models on the Fly Without Retraining: A Sequential Approach to Scalable Continual Model Merging" ☆22 · Updated 2 months ago
- Official code for "pi-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation", ICML 2023 ☆33 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- Official implementation of the paper "A deeper look at depth pruning of LLMs" ☆15 · Updated last year
- [ICLR 2025] Official PyTorch implementation of "Mix-LN: Unleashing the Power of Deeper Layers by Combining Pre-LN and Post-LN" by Pengxia… ☆27 · Updated 4 months ago
- Original code base for "On Pretraining Data Diversity for Self-Supervised Learning" ☆14 · Updated 11 months ago
- BESA is a differentiable weight pruning technique for large language models ☆17 · Updated last year
- [NeurIPS '25] Multi-Token Prediction Needs Registers ☆25 · Updated last week
- Data distillation benchmark ☆71 · Updated 6 months ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆29 · Updated last year
- Official code for the NeurIPS 2022 paper "How Mask Matters: Towards Theoretical Understandings of Masked Autoencoders" ☆68 · Updated 2 years ago
- Code release for "Understanding Bias in Large-Scale Visual Datasets" ☆22 · Updated last year
- ☆40 · Updated last year