KathPra / GCBM
Official repo for arxiv paper: Aligning Visual and Semantic Interpretability through Visually Grounded Concept Bottleneck Models
⭐18 · Updated 2 months ago
Alternatives and similar repositories for GCBM:
Users interested in GCBM are comparing it to the repositories listed below.
- This repo contains the data used in "Towards Understanding Climate Change Perceptions: A Social Media Dataset" ⭐14 · Updated 6 months ago
- The repository accompanying the paper DSEG-LIME, a hierarchical semantic-segmentation-based explanation method. ⭐16 · Updated last month
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ⭐62 · Updated last year
- The official repository for CosPGD: a unified white-box adversarial attack for pixel-wise prediction tasks. ⭐11 · Updated 6 months ago
- ⭐11 · Updated 8 months ago
- Official repository for our paper Robust Models are less Over-Confident ⭐20 · Updated last year
- Code for the paper: Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery. ECCV 2024. ⭐35 · Updated 4 months ago
- [NeurIPS 2024] Code for the paper: B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable. ⭐29 · Updated 2 weeks ago
- ⭐11 · Updated 5 months ago
- This is the official implementation of the Concept Discovery Models paper. ⭐13 · Updated last year
- Official code for "Can We Talk Models Into Seeing the World Differently?" (ICLR 2025). ⭐21 · Updated last month
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ⭐73 · Updated 9 months ago
- Code for FrequencyLowCut Pooling (FLC pooling) ⭐20 · Updated 10 months ago
- A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled concept dat… ⭐88 · Updated 11 months ago
- CVPR 2023: Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification ⭐87 · Updated 9 months ago
- Code for "Learning Where To Look – Generative NAS is Surprisingly Efficient" ⭐15 · Updated 2 years ago
- FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods (ICCV 2023) ⭐15 · Updated 10 months ago
- The official code release for Unsupervised Out-of-distribution Detection with Diffusion Inpainting (ICML 2023) ⭐26 · Updated last year
- ICLR 2024: Energy-Based Concept Bottleneck Models: Unifying Prediction, Concept Intervention, and Probabilistic Interpretations ⭐16 · Updated last month
- ⭐17 · Updated 6 months ago
- Concept Relevance Propagation for Localization Models, accepted at the SAIAD workshop at CVPR 2023. ⭐14 · Updated last year
- Learning Bottleneck Concepts in Image Classification (CVPR 2023) ⭐37 · Updated last year
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ⭐30 · Updated last year
- PIP-Net: Patch-based Intuitive Prototypes Network for Interpretable Image Classification (CVPR 2023) ⭐66 · Updated last year
- [ICLR 2024 Spotlight] Neuron Activation Coverage: Rethinking Out-of-distribution Detection and Generalization ⭐27 · Updated 4 months ago
- Explaining Deep Convolutional Neural Networks via Unsupervised Visual-Semantic Filter Attention (CVPR 2022) ⭐19 · Updated 2 years ago
- Reveal to Revise: An Explainable AI Life Cycle for Iterative Bias Correction of Deep Models. Presented at MICCAI 2023. ⭐19 · Updated last year
- Official PyTorch code for "Out-of-distribution detection with denoising diffusion models" ⭐47 · Updated 8 months ago
- Official code for "Good Teachers Explain: Explanation-enhanced Knowledge Distillation", ECCV 2024 ⭐18 · Updated 4 months ago