mlvlab / SugaFormer
Official Implementation (PyTorch) of "Super-class guided Transformer for Zero-Shot Attribute Classification", AAAI 2025
☆14 · Updated 6 months ago
Alternatives and similar repositories for SugaFormer
Users interested in SugaFormer are comparing it to the repositories listed below.
- [CVPR 2025 Highlight] Official PyTorch codebase for paper: "Assessing and Learning Alignment of Unimodal Vision and Language Models" ☆48 · Updated last month
- Official implementation of CVPR 2024 paper "Retrieval-Augmented Open-Vocabulary Object Detection". ☆41 · Updated 11 months ago
- Official implementation of CVPR 2024 paper "Multi-criteria Token Fusion with One-step-ahead Attention for Efficient Vision Transformers". ☆39 · Updated last week
- The efficient tuning method for VLMs ☆80 · Updated last year
- Official Implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning", ICCV 2023 ☆53 · Updated last year
- Official implementation of CVPR 2024 paper "Prompt Learning via Meta-Regularization". ☆28 · Updated 5 months ago
- CVPR 2024: Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models ☆80 · Updated last year
- MELTR: Meta Loss Transformer for Learning to Fine-tune Video Foundation Models (CVPR 2023) ☆35 · Updated last year
- Open-Vocabulary Video Question Answering: A New Benchmark for Evaluating the Generalizability of Video Question Answering Models (ICCV 20… ☆18 · Updated last year
- PyTorch implementation for CVPR 2024 paper: Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation ☆50 · Updated 2 weeks ago
- Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models (AAAI 2024) ☆74 · Updated 6 months ago
- [ECCV 2024] Official PyTorch implementation of LUT "Learning with Unmasked Tokens Drives Stronger Vision Learners" ☆12 · Updated 8 months ago
- Python code to implement DeIL, a CLIP-based approach for open-world few-shot learning. ☆17 · Updated 9 months ago
- Official Implementation of "Semantics-Consistent Feature Search for Self-Supervised Visual Representation Learning" in AAAI 2024. ☆13 · Updated last year
- [CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training ☆28 · Updated 4 months ago
- ICCV 2023: CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No ☆139 · Updated last year
- The official code for "TextRefiner: Internal Visual Feature as Efficient Refiner for Vision-Language Models Prompt Tuning" [AAAI 2025] ☆42 · Updated 5 months ago
- [CVPR 2024] Official repository of ST_GT ☆9 · Updated 10 months ago
- Official Repository for CVPR 2024 Paper: "Large Language Models are Good Prompt Learners for Low-Shot Image Classification" ☆37 · Updated last year
- [CVPR 2024 Highlight] Official repository of the paper "The devil is in the fine-grained details: Evaluating open-vocabulary object detect… ☆58 · Updated 4 months ago
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models ☆85 · Updated last year
- Official PyTorch implementation of "E2VPT: An Effective and Efficient Approach for Visual Prompt Tuning" (ICCV 2023) ☆71 · Updated last year
- [COLING 2025] HGCLIP: Exploring Vision-Language Models with Graph Representations for Hierarchical Understanding ☆42 · Updated 8 months ago
- ☆35 · Updated last year
- [ICLR 2025] Cross the Gap: Exposing the Intra-modal Misalignment in CLIP via Modality Inversion ☆50 · Updated 3 months ago
- Validating image classification benchmark results on ViTs and ResNets (v2) ☆12 · Updated 2 years ago
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆41 · Updated last year
- ☆22 · Updated 2 years ago
- [ECCV 2024] Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation ☆62 · Updated last month
- [CVPR'23 & TPAMI'25] Hard Patches Mining for Masked Image Modeling ☆100 · Updated 3 months ago