deel-ai / Craft
Code for: "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023)
★62 · Updated last year
Alternatives and similar repositories for Craft:
Users interested in Craft are comparing it to the libraries listed below
- Overcomplete is a Vision-based SAE Toolbox ★42 · Updated this week
- LENS Project ★47 · Updated last year
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ★59 · Updated last month
- FunnyBirds: A Synthetic Vision Dataset for a Part-Based Analysis of Explainable AI Methods (ICCV 2023) ★20 · Updated 3 weeks ago
- [NeurIPS 2024] Code for the paper: B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable. ★30 · Updated last week
- A new framework to transform any neural network into an interpretable concept-bottleneck model (CBM) without needing labeled concept dat… ★91 · Updated 11 months ago
- Natural Language Descriptions of Deep Visual Features, ICLR 2022 ★62 · Updated last year
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ★74 · Updated 10 months ago
- Code for the paper "A Whac-A-Mole Dilemma Shortcuts Come in Multiples Where Mitigating One Amplifies Others"β48Updated 8 months ago
- Official PyTorch implementation of improved B-cos models ★47 · Updated last year
- Official code for "Can We Talk Models Into Seeing the World Differently?" (ICLR 2025). ★21 · Updated last month
- ★64 · Updated 5 months ago
- Code for the paper: Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery. ECCV 2024. ★38 · Updated 4 months ago
- Code release for the paper "Extremely Simple Activation Shaping for Out-of-Distribution Detection" ★52 · Updated 6 months ago
- Uncertainty-aware representation learning (URL) benchmark ★102 · Updated 2 weeks ago
- Official code for the ICML 2024 paper "The Entropy Enigma: Success and Failure of Entropy Minimization" ★49 · Updated 9 months ago
- ★108 · Updated last year
- ★35 · Updated 2 years ago
- Aligning Human & Machine Vision using explainability ★51 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ★124 · Updated 9 months ago
- PixMix: Dreamlike Pictures Comprehensively Improve Safety Measures (CVPR 2022) ★105 · Updated 2 years ago
- B-cos Networks: Alignment is All We Need for Interpretability ★107 · Updated last year
- An automatic and efficient tool to describe functionalities of individual neurons in DNNs ★46 · Updated last year
- Code and data for the paper "In or Out? Fixing ImageNet Out-of-Distribution Detection Evaluation" ★24 · Updated last year
- ★43 · Updated 4 months ago
- ★44 · Updated 2 years ago
- ★10 · Updated 4 months ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ★34 · Updated 11 months ago
- ★39 · Updated 10 months ago
- Code for Continuously Changing Corruptions (CCC) benchmark + evaluation ★34 · Updated 7 months ago