ExplainableML / DeViL
GCPR 2023 - DeViL: Decoding Vision features into Language
☆12 · Updated 2 years ago
Alternatives and similar repositories for DeViL
Users interested in DeViL are comparing it to the repositories listed below.
- [CVPR 2022] HINT: Hierarchical Neuron Concept Explainer ☆20 · Updated 2 years ago
- Official Code Release for "Diagnosing and Rectifying Vision Models using Language" (ICLR 2023) ☆34 · Updated 2 years ago
- Official code for "Can We Talk Models Into Seeing the World Differently?" (ICLR 2025). ☆26 · Updated 8 months ago
- Learning Bottleneck Concepts in Image Classification (CVPR 2023) ☆40 · Updated last year
- FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods (ICCV 2023) ☆21 · Updated 6 months ago
- Official repository for the ICCV 2023 paper: "Waffling around for Performance: Visual Classification with Random Words and Broad Concepts… ☆59 · Updated 2 years ago
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ☆84 · Updated last year
- [ICLR 23 spotlight] An automatic and efficient tool to describe functionalities of individual neurons in DNNs ☆55 · Updated last year
- Create generated datasets and train robust classifiers ☆36 · Updated 2 years ago
- Code for the ICCV 2023 paper "Benchmarking Low-Shot Robustness to Natural Distribution Shifts" ☆11 · Updated last year
- LANCE: Stress-testing Visual Models by Generating Language-guided Counterfactual Images ☆32 · Updated last year
- ☆120 · Updated 2 years ago
- [NeurIPS 2023] Official Pytorch code for LOVM: Language-Only Vision Model Selection ☆21 · Updated last year
- [ICLR 23] A new framework to transform any neural networks into an interpretable concept-bottleneck-model (CBM) without needing labeled c… ☆113 · Updated last year
- The Continual Learning in Multimodality Benchmark ☆67 · Updated 2 years ago
- ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition ☆40 · Updated 2 years ago
- ☆65 · Updated 3 years ago
- CVPR 2023: Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification ☆98 · Updated last year
- [CVPR 2023] Learning Visual Representations via Language-Guided Sampling ☆149 · Updated 2 years ago
- B-cos Networks: Alignment is All we Need for Interpretability ☆110 · Updated last year
- This repository contains the code and datasets for our ICCV-W paper 'Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts… ☆29 · Updated last year
- ImageNetV2 Pytorch Dataset ☆41 · Updated 2 years ago
- Code for Finetune like you pretrain: Improved finetuning of zero-shot vision models ☆103 · Updated 2 years ago
- ☆27 · Updated last year
- Mind the Gap: Understanding the Modality Gap in Multi-modal Contrastive Representation Learning ☆162 · Updated 3 years ago
- Official PyTorch implementation of improved B-cos models ☆53 · Updated 3 weeks ago
- Learning to compose soft prompts for compositional zero-shot learning. ☆91 · Updated last month
- This repo implements the CVPR23 paper Trainable Projected Gradient Method for Robust Fine-tuning ☆24 · Updated last year
- [NeurIPS 24] A new training and evaluation framework for learning interpretable deep vision models and benchmarking different interpretab… ☆22 · Updated 4 months ago
- ☆59 · Updated 2 years ago