KyanChen / MakeMultiHeadNaive
Uses a naive MultiheadAttention implementation to replace nn.MultiheadAttention in PyTorch
☆37 · Updated 6 months ago
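To make the repository's purpose concrete, here is a minimal sketch of the idea, assuming batch-first tensors; the class name NaiveMultiheadAttention and all details below are illustrative, not the repository's actual code. It rebuilds multi-head attention from four plain nn.Linear layers, exposing the query/key/value projections as separate modules instead of the fused (3*embed_dim, embed_dim) in_proj_weight that nn.MultiheadAttention keeps internally.

```python
# Illustrative sketch only, not code from MakeMultiHeadNaive.
import torch
import torch.nn as nn

class NaiveMultiheadAttention(nn.Module):
    """Multi-head attention built from plain nn.Linear projections.

    Unlike nn.MultiheadAttention, which fuses q/k/v into a single
    in_proj_weight of shape (3*embed_dim, embed_dim), each projection
    here is a separate nn.Linear, so tools that wrap nn.Linear
    (e.g. LoRA injectors) can target them directly.
    """

    def __init__(self, embed_dim: int, num_heads: int, dropout: float = 0.0):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)
        self.dropout = nn.Dropout(dropout)

    def forward(self, query, key, value):
        # Assumes batch-first (batch, seq, embed_dim) inputs; note that
        # nn.MultiheadAttention defaults to (seq, batch, embed_dim).
        b, tq, e = query.shape
        tk = key.shape[1]
        q = self.q_proj(query).view(b, tq, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.k_proj(key).view(b, tk, self.num_heads, self.head_dim).transpose(1, 2)
        v = self.v_proj(value).view(b, tk, self.num_heads, self.head_dim).transpose(1, 2)
        attn = torch.softmax((q @ k.transpose(-2, -1)) * self.scale, dim=-1)
        out = (self.dropout(attn) @ v).transpose(1, 2).reshape(b, tq, e)
        return self.out_proj(out)
```

To port weights from an existing nn.MultiheadAttention, its fused in_proj_weight can be split into three equal chunks along dim 0 (query, key, value, in that order) and copied into q_proj, k_proj, and v_proj, with in_proj_bias handled the same way.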
Alternatives and similar repositories for MakeMultiHeadNaive
Users interested in MakeMultiHeadNaive are comparing it to the libraries listed below.
- ☆170 · Updated last year
- PyTorch reimplementation of LoRA (with support for nn.MultiheadAttention in OpenCLIP; a minimal LoRA sketch appears after this list) ☆67 · Updated 2 months ago
- A collection of parameter-efficient transfer learning papers focusing on computer vision and multimodal domains. ☆404 · Updated 11 months ago
- [ICCV'23 Main Track, WECIA'23 Oral] Official repository of the paper titled "Self-regulating Prompts: Foundational Model Adaptation without F…" ☆273 · Updated last year
- [AAAI'25, CVPRW 2024] Official repository of the paper titled "Learning to Prompt with Text Only Supervision for Vision-Language Models" ☆111 · Updated 8 months ago
- Collection of Composed Image Retrieval (CIR) papers ☆253 · Updated last week
- Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022) ☆193 · Updated 2 years ago
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆169 · Updated last year
- ☆348 · Updated last year
- [ICML 2024] "Visual-Text Cross Alignment: Refining the Similarity Score in Vision-Language Models" ☆55 · Updated 11 months ago
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding (NeurIPS 2024) ☆27 · Updated 8 months ago
- [ICCV 2023] Prompt-aligned Gradient for Prompt Tuning ☆165 · Updated 2 years ago
- [NeurIPS'22] Official implementation of "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning" ☆185 · Updated last year
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆284 · Updated last year
- Official implementation of "Interpreting CLIP's Image Representation via Text-Based Decomposition" ☆221 · Updated 2 months ago
- Awesome Vision-Language Pretraining Papers ☆34 · Updated 7 months ago
- Code for the paper "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆103 · Updated 2 years ago
- ☆192 · Updated 2 years ago
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?", Oral @ ICLR … ☆285 · Updated 2 years ago
- Code for the paper "Boosting Continual Learning of Vision-Language Models via Mixture-of-Experts Adapters" (CVPR 2024) ☆233 · Updated 9 months ago
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT; [tech report] Convpass ☆192 · Updated 2 years ago
- Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models ☆96 · Updated last year
- [CVPR 2024] Official implementation of "CLIP-KD: An Empirical Study of CLIP Model Distillation" ☆124 · Updated this week
- Exploring Visual Prompts for Adapting Large-Scale Models ☆282 · Updated 3 years ago
- ☆100 · Updated last year
- [ICCV 2023] CTP: Towards Vision-Language Continual Pretraining via Compatible Momentum Contrast and Topology Preservation ☆36 · Updated 10 months ago
- Code for "Finetune like you pretrain: Improved finetuning of zero-shot vision models" ☆102 · Updated 2 years ago
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" ☆149 · Updated last year
- Official code for the ICLR 2024 paper "A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation" ☆80 · Updated last year
- ☆11 · Updated 5 months ago
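As referenced in the LoRA entry above, here is a minimal LoRA sketch, assuming the common formulation y = Wx + (alpha/r) * BAx with B initialized to zero; the class name LoRALinear is hypothetical, not the listed repository's API. Separate q/k/v nn.Linear projections, as in the naive attention sketch above, are exactly what such a wrapper can target, which nn.MultiheadAttention's fused in_proj_weight does not expose.

```python
# Hypothetical LoRA wrapper, not code from the listed repository.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained projection
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank  # update starts at zero since lora_b is zero

    def forward(self, x):
        # base(x) + scaling * (x A^T) B^T
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling

# Usage with the naive attention sketch above:
# mha.q_proj = LoRALinear(mha.q_proj, rank=8)
```

Because lora_b starts at zero, the wrapped module initially behaves exactly like the frozen base layer, and only the low-rank factors receive gradients during fine-tuning.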