itaybenou / show-and-tell
[CVPR 2025] Official implementation of the paper "Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models" (by Benou and Riklin-Raviv): https://arxiv.org/abs/2502.20134.
☆16 · Updated 7 months ago
Alternatives and similar repositories for show-and-tell
Users interested in show-and-tell are comparing it to the repositories listed below.
- Medical Imaging Benchmarks for Out-Of-Distribution Detection ☆40 · Updated last week
- ☆49 · Updated 11 months ago
- [ECCV 2024] Soft Prompt Generation for Domain Generalization ☆30 · Updated last year
- Code for [CVPR 2024] Each Test Image Deserves A Specific Prompt: Continual Test-Time Adaptation for 2D Medical Image Segmentation. ☆75 · Updated last year
- Collection of Unsupervised Learning Methods for Vision-Language Models (VLMs) ☆80 · Updated 2 weeks ago
- ☆40 · Updated last year
- ☆64 · Updated 3 months ago
- Official implementation of the paper "Unifying Segment Anything in Microscopy with Multimodal Large Language Model" ☆20 · Updated last month
- [CVPR 2024] Official Repository for "Efficient Test-Time Adaptation of Vision-Language Models" ☆112 · Updated last year
- The official PyTorch implementation of our CVPR 2024 paper "MMA: Multi-Modal Adapter for Vision-Language Models". ☆95 · Updated 9 months ago
- [MICCAI 2023][Early Accept] Official code repository of paper titled "Cross-modulated Few-shot Image Generation for Colorectal Tissue Cla… ☆47 · Updated 2 years ago
- Official implementations of our LaZSL (ICCV'25) ☆39 · Updated 6 months ago
- [CVPR 2024] Zero-shot method for Vision-Language Models based on a robust formulation of the MeanShift algorithm for Test-time Augmentati… ☆64 · Updated last year
- [ECCV 2024] Mind the Interference: Retaining Pre-trained Knowledge in Parameter Efficient Continual Learning of Vision-Language Models ☆56 · Updated last year
- [NeurIPS 2023] LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning ☆102 · Updated 6 months ago
- [CVPR'24] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP)… ☆80 · Updated 7 months ago
- [AAAI'25, CVPRW 2024] Official repository of paper titled "Learning to Prompt with Text Only Supervision for Vision-Language Models". ☆120 · Updated last year
- Official implementation of the paper "Vision-Language Models are Strong Noisy Label Detectors" ☆15 · Updated 10 months ago
- [ACCV 2024] ObjectCompose: Evaluating Resilience of Vision-Based Models on Object-to-Background Compositional Changes 🚀 ☆37 · Updated last year
- [CVPR 2025] Understanding Fine-tuning CLIP for Open-vocabulary Semantic Segmentation in Hyperbolic Space ☆36 · Updated 6 months ago
- [ECCV'24 Oral] CLIFF: Continual Latent Diffusion for Open-Vocabulary Object Detection ☆29 · Updated last year
- [MICCAI 2023] Official code repository of paper titled "Frequency Domain Adversarial Training for Robust Volumetric Medical Segmentation"… ☆52 · Updated 2 years ago
- PyTorch implementation of "Test-time Adaptation against Multi-modal Reliability Bias". ☆44 · Updated last year
- ☆46 · Updated last year
- This is an official implementation for Finer-CAM: Spotting the Difference Reveals Finer Details for Visual Explanation. [CVPR'25] ☆46 · Updated 2 months ago
- [NeurIPS 2025 Datasets & Benchmarks Track] The Illusion of Progress? A Critical Look at Test-Time Adaptation for Vision-Language Models ☆31 · Updated 3 months ago
- [CVPR 2024] Code for our paper "DeiT-LT: Distillation Strikes Back for Vision Transformer training on Long-Tailed Datasets" ☆47 · Updated last year
- An easy way to apply LoRA to CLIP. Implementation of the paper "Low-Rank Few-Shot Adaptation of Vision-Language Models" (CLIP-LoRA) [CVPR… ☆282 · Updated 7 months ago
- [TPAMI 2026] Advances in Multimodal Adaptation and Generalization: From Traditional Approaches to Foundation Models ☆168 · Updated this week
- [NeurIPS 2024] WATT: Weight Average Test-Time Adaptation of CLIP ☆56 · Updated last year