dongzelian / SSF
[NeurIPS'22] An official implementation of "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning".
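SSF adapts a frozen pre-trained backbone by inserting lightweight per-channel scale and shift parameters after selected operations and training only those. Below is a minimal sketch of the idea, assuming PyTorch; the module name `SSFLayer` and the 768-d linear layer standing in for a backbone block are illustrative, not this repo's actual API:

```python
import torch
import torch.nn as nn

class SSFLayer(nn.Module):
    """Learnable per-channel scale and shift: y = gamma * x + beta (illustrative, not the repo's API)."""
    def __init__(self, dim: int):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(dim))   # scale, initialized to identity
        self.beta = nn.Parameter(torch.zeros(dim))   # shift, initialized to zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (..., dim); the parameters broadcast over leading dims
        return x * self.gamma + self.beta

# Freeze the pre-trained backbone; only the SSF parameters (and the task
# head, omitted here) receive gradients during tuning.
backbone = nn.Sequential(nn.Linear(768, 768), SSFLayer(768), nn.GELU())
for name, p in backbone.named_parameters():
    p.requires_grad = ("gamma" in name) or ("beta" in name)
```

Because the scale and shift are linear, the paper notes they can be merged into the preceding layer's weights after training, so inference adds no extra parameters or latency.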
Related projects
Alternatives and complementary repositories for SSF
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass
- [TPAMI] Searching prompt modules for parameter-efficient transfer learning.
- [ICCV 2023] CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?"
- [ICCV 2023 oral] The official repository for the paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning".
- [CVPR 2023] The official implementation of the paper "Masked Autoencoders Enable Efficient Knowledge Distillers".
- Dataset pruning for ImageNet and LAION-2B.
- PyTorch implementation of the paper "MILAN: Masked Image Pretraining on Language Assisted Representation" https://arxiv.org/pdf/2208.0604…
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models
- [NeurIPS 2022] Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning
- [CVPR 2024] Efficient Dataset Distillation via Minimax Diffusion
- [CVPR 2024] Official implementations of CLIP-KD: An Empirical Study of CLIP Model Distillation
- [AAAI 2023] Official implementation of the paper "Parameter-efficient Model Adaptation for Vision Transformers"