pietrobarbiero / entropy-lens
☆16 · Updated last year
Alternatives and similar repositories for entropy-lens:
Users interested in entropy-lens are comparing it to the libraries listed below.
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper… ☆57 · Updated 3 weeks ago
- This repository contains the implementation of Concept Activation Regions, a new framework to explain deep neural networks with human con… ☆11 · Updated 2 years ago
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI ☆52 · Updated 2 years ago
- Official repository of ICML 2023 paper: Dividing and Conquering a BlackBox to a Mixture of Interpretable Models: Route, Interpret, Repeat ☆23 · Updated 11 months ago
- This repository contains the implementation of SimplEx, a method to explain the latent representations of black-box models with the help … ☆24 · Updated 2 years ago
- NeurIPS 2021 | Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information ☆32 · Updated 3 years ago
- Implementation of the paper "A Framework for Learning Ante-hoc Explainable Models via Concepts" (CVPR 2022). ☆8 · Updated 7 months ago
- Discover and Cure: Concept-aware Mitigation of Spurious Correlation (ICML 2023) ☆40 · Updated 10 months ago
- Code for "Interpretable image classification with differentiable prototypes assignment", ECCV 2022 ☆24 · Updated 2 years ago
- Codebase for SEFS: Self-Supervision Enhanced Feature Selection with Correlated Gates ☆23 · Updated last year
- Implementation of Concept-level Debugging of Part-Prototype Networks ☆11 · Updated last year
- Code for the paper "Post-hoc Concept Bottleneck Models". Spotlight @ ICLR 2023 ☆73 · Updated 9 months ago
- ☆30 · Updated 3 years ago
- CME: Concept-based Model Extraction ☆12 · Updated 4 years ago
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations is a ServiceNow Research project that was started at Elemen… ☆13 · Updated last year
- ☆24 · Updated last year
- [CLeaR23] Causal Triplet: An Open Challenge for Intervention-centric Causal Representation Learning ☆30 · Updated last year
- Repository for the NeurIPS 2023 paper "Beyond Confidence: Reliable Models Should Also Consider Atypicality" ☆12 · Updated 10 months ago
- Code for "Generating Interpretable Counterfactual Explanations By Implicit Minimisation of Epistemic and Aleatoric Uncertainties" ☆18 · Updated 3 years ago
- [NeurIPS 23] Characterizing OOD Error via Optimal Transport ☆13 · Updated last year
- GitHub repository for KDD 2021 work: ProtoPShare: Prototypical Parts Sharing for Similarity Discovery in Interpretable Image Classificati… ☆11 · Updated 3 years ago
- An Empirical Framework for Domain Generalization In Clinical Settings ☆29 · Updated 2 years ago
- Experiments to reproduce results in Interventional Causal Representation Learning. ☆25 · Updated 2 years ago
- Local explanations with uncertainty 💐! ☆39 · Updated last year
- Code for the ICLR 2022 paper "Attention-based interpretability with Concept Transformers" ☆40 · Updated last year
- ☆42 · Updated 2 years ago
- [ICLR 2023, ICLR DG oral] PAIR, the optimizer and model selection criteria for OOD Generalization ☆52 · Updated 10 months ago
- A method to generate counterfactuals ☆12 · Updated 2 months ago
- Logic Explained Networks is a Python repository implementing explainable-by-design deep learning models. ☆48 · Updated last year
- ☆10 · Updated this week