tonychenxyz / vit-interpret
Official implementation of "Interpreting and Controlling Vision Foundation Models via Text Explanations"
☆13 · Updated last year
Alternatives and similar repositories for vit-interpret
Users interested in vit-interpret are comparing it to the repositories listed below.
- Code release for "Understanding Bias in Large-Scale Visual Datasets" ☆21 · Updated 9 months ago
- Repo for the paper "Towards Realistic Zero-Shot Classification via Self Structural Semantic Alignment" (AAAI'24 Oral) ☆25 · Updated last year
- Augmenting with Language-guided Image Augmentation (ALIA) ☆79 · Updated last year
- [WACV 2025 Oral] DeepMIM: Deep Supervision for Masked Image Modeling ☆53 · Updated 4 months ago
- ☆35 · Updated last year
- [TIP] Exploring Effective Factors for Improving Visual In-Context Learning ☆19 · Updated 2 months ago
- Codebase of SynthCLIP: CLIP training with purely synthetic text-image pairs from LLMs and TTIs ☆100 · Updated 5 months ago
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" ☆178 · Updated last year
- Official repository for the ICCV 2023 paper "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts…" ☆58 · Updated 2 years ago
- ☆27 · Updated last year
- [ECCVW 2024 Oral] Official repository of the paper "Makeup-Guided Facial Privacy Protection via Untrained Neural Network Priors" ☆12 · Updated 11 months ago
- ☆60 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆42 · Updated 9 months ago
- COLA: Evaluate how well your vision-language model can Compose Objects Localized with Attributes! ☆24 · Updated 9 months ago
- [ICLR 2024] Official code for the paper "LLM Blueprint: Enabling Text-to-Image Generation with Complex and Detailed Prompts" ☆81 · Updated last year
- Code implementation of the NeurIPS 2023 paper "Vocabulary-free Image Classification" ☆107 · Updated last year
- Official repository for LLaVA-Reward (ICCV 2025): Multimodal LLMs as Customized Reward Models for Text-to-Image Generation ☆20 · Updated last month
- Compress conventional Vision-Language Pre-training data ☆52 · Updated last year
- [CVPR 2023] Zero-shot Generative Model Adaptation via Image-specific Prompt Learning ☆83 · Updated 2 years ago
- [NIPS2023] Implementation of "Foundation Model is Efficient Multimodal Multitask Model Selector" ☆37 · Updated last year
- Training code for CLIP-FlanT5 ☆29 · Updated last year
- A curated list of papers & resources linked to concept learning ☆13 · Updated 2 years ago
- [CVPR 2024 Highlight] ImageNet-D ☆43 · Updated 11 months ago
- Repo for the NeurIPS 2023 paper "Divide, Evaluate, and Refine: Evaluating and Improving Text-to-Image Alignment with Iterative VQA Fee…" ☆26 · Updated last year
- ☆11 · Updated 3 years ago
- [CVPR 2024 Highlight] Official implementation for Transferable Visual Prompting, from the paper "Exploring the Transferability of Visual Prompt…" ☆44 · Updated 9 months ago
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆73 · Updated 2 years ago
- [CVPR 2023] Improving Zero-shot Generalization and Robustness of Multi-modal Models ☆34 · Updated 2 years ago
- [CBMI 2024 Best Paper] Official repository of the paper "Is CLIP the main roadblock for fine-grained open-world perception?" ☆28 · Updated 4 months ago
- Test-Time Distribution Normalization for Contrastively Learned Vision-Language Models ☆27 · Updated last year