deel-ai / Craft
Code for: "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023)
☆64 · Updated last year
Alternatives and similar repositories for Craft
Users interested in Craft are comparing it to the libraries listed below.
- Overcomplete is a Vision-based SAE Toolbox ☆57 · Updated 2 months ago
- LENS Project ☆48 · Updated last year
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ☆32 · Updated last year
- Code for the paper "A Whac-A-Mole Dilemma: Shortcuts Come in Multiples Where Mitigating One Amplifies Others" ☆48 · Updated 10 months ago
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆62 · Updated last week
- ☆11 · Updated last month
- ☆39 · Updated last year
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ☆77 · Updated last year
- [NeurIPS 2024] Code for the paper: B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable. ☆30 · Updated this week
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆34 · Updated last year
- [ICLR 23 spotlight] An automatic and efficient tool to describe functionalities of individual neurons in DNNs ☆50 · Updated last year
- Official code for "Can We Talk Models Into Seeing the World Differently?" (ICLR 2025). ☆24 · Updated 4 months ago
- Official PyTorch implementation of improved B-cos models ☆48 · Updated last year
- ☆72 · Updated 7 months ago
- Natural Language Descriptions of Deep Visual Features, ICLR 2022 ☆65 · Updated last year
- Code for the paper: Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery. ECCV 2024. ☆44 · Updated 7 months ago
- Re-implementation of the StylEx paper, training a GAN to explain a classifier in StyleSpace, paper by Lang et al. (2021). ☆37 · Updated last year
- Code and data for the paper "In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation" ☆24 · Updated last year
- What do we learn from inverting CLIP models? ☆54 · Updated last year
- ☆107 · Updated last year
- ☆13 · Updated 2 years ago
- B-cos Networks: Alignment is All we Need for Interpretability ☆109 · Updated last year
- ☆45 · Updated 2 years ago
- [ICLR 23] A new framework to transform any neural networks into an interpretable concept-bottleneck-model (CBM) without needing labeled c… ☆102 · Updated last year
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ☆51 · Updated last year
- Sparse Linear Concept Embeddings ☆98 · Updated 2 months ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆129 · Updated 11 months ago
- Personal implementation of ASIF by Antonio Norelli ☆25 · Updated last year
- A toolkit for quantitative evaluation of data attribution methods. ☆47 · Updated last month
- Code for the CCE algorithm proposed in "Towards Compositionality in Concept Learning" at ICML 2024. ☆15 · Updated last year