deel-ai / Craft
Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023)
⭐65 · Updated last year
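As a rough illustration only (not the deel-ai/Craft API), the core step in CRAFT is factorizing the non-negative activations of many image crops into a small concept bank with non-negative matrix factorization. The sketch below assumes placeholder random activations and scikit-learn's NMF as a stand-in; all names and shapes are hypothetical.

```python
# Minimal sketch of CRAFT's core step (activation factorization), assuming
# scikit-learn's NMF as a stand-in; the data and shapes below are placeholders,
# not the deel-ai/Craft API.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Stand-in for post-ReLU activations of 512 image crops at some layer (N x d).
activations = np.abs(rng.normal(size=(512, 2048)))

n_concepts = 10
nmf = NMF(n_components=n_concepts, init="nndsvda", max_iter=500)
U = nmf.fit_transform(activations)   # (N, n_concepts): per-crop concept coefficients
W = nmf.components_                  # (n_concepts, d): the concept bank

# A low relative reconstruction error means the concepts span the activations well.
rel_err = np.linalg.norm(activations - U @ W) / np.linalg.norm(activations)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The full method additionally estimates concept importance (via Sobol indices) and decomposes concepts recursively across layers, which this sketch omits.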
Alternatives and similar repositories for Craft
Users that are interested in Craft are comparing it to the libraries listed below
- Overcomplete is a Vision-based SAE Toolbox ⭐63 · Updated 3 months ago
- LENS Project ⭐48 · Updated last year
- ⭐11 · Updated last month
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ⭐63 · Updated last month
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ⭐78 · Updated last year
- Official PyTorch implementation of improved B-cos models ⭐50 · Updated last year
- ⭐39 · Updated last year
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ⭐35 · Updated last year
- Official code for "Can We Talk Models Into Seeing the World Differently?" (ICLR 2025). ⭐25 · Updated 4 months ago
- Aligning Human & Machine Vision using explainability ⭐52 · Updated last year
- Code for the paper "A Whac-A-Mole Dilemma Shortcuts Come in Multiples Where Mitigating One Amplifies Others"β48Updated 11 months ago
- [NeurIPS 2024] Code for the paper: B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable. ⭐31 · Updated 3 weeks ago
- Code for the paper: Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery. ECCV 2024. ⭐45 · Updated 7 months ago
- Re-implementation of the StylEx paper, training a GAN to explain a classifier in StyleSpace, paper by Lang et al. (2021). ⭐37 · Updated last year
- [ICLR 23] A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled c… ⭐103 · Updated last year
- [ICLR 23 spotlight] An automatic and efficient tool to describe functionalities of individual neurons in DNNs ⭐52 · Updated last year
- A toolkit for quantitative evaluation of data attribution methods. ⭐48 · Updated this week
- FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods (ICCV 2023) ⭐21 · Updated 2 months ago
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ⭐32 · Updated last year
- ⭐45 · Updated 2 years ago
- Code release for the paper "Extremely Simple Activation Shaping for Out-of-Distribution Detection" ⭐54 · Updated 9 months ago
- Natural Language Descriptions of Deep Visual Features, ICLR 2022 ⭐65 · Updated last year
- Code and data for the paper "In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation" ⭐24 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ⭐129 · Updated last year
- ⭐107 · Updated last year
- ⭐13 · Updated 2 years ago
- B-cos Networks: Alignment is All we Need for Interpretability ⭐109 · Updated last year
- Updated code base for GlanceNets: Interpretable, Leak-proof Concept-based models ⭐25 · Updated last year
- Recycling diverse models ⭐44 · Updated 2 years ago
- Distilling Model Failures as Directions in Latent Space ⭐47 · Updated 2 years ago