alexlioralexli / attention-transfer
☆20 · Updated 10 months ago
Alternatives and similar repositories for attention-transfer
Users interested in attention-transfer are comparing it to the libraries listed below.
- AlignCLIP: Improving Cross-Modal Alignment in CLIP (ICLR 2025) ☆48 · Updated 6 months ago
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆83 · Updated last year
- CLIP-MoE: Mixture of Experts for CLIP ☆46 · Updated 11 months ago
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images. ☆29 · Updated last year
- [NeurIPS 2024] MoME: Mixture of Multimodal Experts for Generalist Multimodal Large Language Models ☆72 · Updated 4 months ago
- Official implementation for paper "Knowledge Diffusion for Distillation", NeurIPS 2023 ☆89 · Updated last year
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆61 · Updated last year
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆43 · Updated last year
- [ICCV 2023 oral] This is the official repository for our paper: "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning". ☆73 · Updated last year
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … ☆58 · Updated 10 months ago
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆80 · Updated 4 months ago
- The official implementation for MTLoRA: A Low-Rank Adaptation Approach for Efficient Multi-Task Learning (CVPR '24) ☆62 · Updated 2 months ago
- [ICLR 2025] This repository is the official implementation of our Autoregressive Pretraining with Mamba in Vision ☆84 · Updated 3 months ago
- Code for the paper "Visual Explanations of Image–Text Representations via Multi-Modal Information Bottleneck Attribution" ☆57 · Updated last year
- Official implementation of "Scaling Laws in Patchification: An Image Is Worth 50,176 Tokens And More" ☆23 · Updated 6 months ago
- [ICLR 2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models ☆171 · Updated last year
- [ECCV 2022] What to Hide from Your Students: Attention-Guided Masked Image Modeling ☆71 · Updated last year
- [ICCV 2023 & AAAI 2023] Binary Adapters & FacT, [Tech report] Convpass ☆192 · Updated 2 years ago
- [ICLR 2023 oral] The Modality Focusing Hypothesis: Towards Understanding Crossmodal Knowledge Distillation ☆45 · Updated 2 years ago
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆53 · Updated last year
- Code for the paper "Compositional Entailment Learning for Hyperbolic Vision-Language Models". ☆82 · Updated 3 months ago
- ☆50 · Updated 4 months ago
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models. ☆90 · Updated last year
- [ICML 2024] Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning ☆50 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆42 · Updated 9 months ago
- [NeurIPS 2023] Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization ☆105 · Updated last year
- Official implementation of CVPR 2024 paper "Prompt Learning via Meta-Regularization". ☆29 · Updated 6 months ago
- PyTorch code for "Contrastive Region Guidance: Improving Grounding in Vision-Language Models without Training" ☆36 · Updated last year
- ☆101 · Updated last year
- ☆91 · Updated 2 years ago