OscarXZQ / weight-selection
☆191 · Updated last year
Alternatives and similar repositories for weight-selection
Users interested in weight-selection are comparing it to the repositories listed below.
- Official code for "TOAST: Transfer Learning via Attention Steering"☆188Updated 2 years ago
- Official code for our CVPR'22 paper “Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space”☆251Updated 5 months ago
- ☆56Updated 2 years ago
- Code for experiments for "ConvNet vs Transformer, Supervised vs CLIP: Beyond ImageNet Accuracy"☆102Updated last year
- Implementation of Soft MoE, proposed by Brain's Vision team, in Pytorch☆343Updated 9 months ago
- PyTorch Implementation of Object Recognition as Next Token Prediction [CVPR'24 Highlight]☆182Updated 9 months ago
- Code for NOLA, an implementation of "NOLA: Compressing LoRA using Linear Combination of Random Basis" ☆57 · Updated last year
- A framework for merging models solving different tasks with different initializations into one multi-task model without any additional tr… ☆309 · Updated 2 years ago
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers ☆105 · Updated last year
- PyTorch implementation of Soft MoE by Google Brain in "From Sparse to Soft Mixtures of Experts" (https://arxiv.org/pdf/2308.00951.pdf) ☆82 · Updated 2 years ago
- Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers" ☆106 · Updated 2 years ago
- Official implementation of "DoRA: Weight-Decomposed Low-Rank Adaptation" ☆124 · Updated last year
- Matryoshka Multimodal Models ☆121 · Updated last year
- Implementation of the paper: "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆113 · Updated this week
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- Model Stock: All we need is just a few fine-tuned models ☆128 · Updated 5 months ago
- A simple minimal implementation of Reversible Vision Transformers ☆126 · Updated last year
- ☆107 · Updated last year
- Is gradient information useful for pruning of LLMs? ☆47 · Updated 5 months ago
- [CVPR 2025] Official PyTorch implementation of MaskSub "Masking meets Supervision: A Strong Learning Alliance" ☆45 · Updated 10 months ago
- A repository for DenseSSMs ☆88 · Updated last year
- Awesome list of papers that extend Mamba to various applications. ☆138 · Updated 7 months ago
- 1.5−3.0× lossless training or pre-training speedup. An off-the-shelf, easy-to-implement algorithm for the efficient training of foundatio… ☆226 · Updated last year
- Language Quantized AutoEncoders ☆111 · Updated 2 years ago
- Implementation of MaMMUT, a simple vision-encoder text-decoder architecture for multimodal tasks from Google, in Pytorch ☆103 · Updated 2 years ago
- [ICLR 2022] "As-ViT: Auto-scaling Vision Transformers without Training" by Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wa… ☆76 · Updated 3 years ago
- Code and models for the paper "The effectiveness of MAE pre-pretraining for billion-scale pretraining" https://arxiv.org/abs/2303.13496 ☆92 · Updated 9 months ago
- When do we not need larger vision models? ☆412 · Updated 11 months ago
- Distributed Optimization Infra for learning CLIP models ☆27 · Updated last year
- PyTorch implementation of "From Sparse to Soft Mixtures of Experts" ☆68 · Updated 2 years ago