JAEarly / MILLI
Code for the paper "Model Agnostic Interpretability for Multiple Instance Learning".
Alternatives and similar repositories for MILLI
Users interested in MILLI are comparing it to the repositories listed below:
- This repository contains the implementation of Label-Free XAI, a new framework to adapt explanation methods to unsupervised models.
- An Empirical Framework for Domain Generalization in Clinical Settings
- Library implementing state-of-the-art Concept-based and Disentanglement Learning methods for Explainable AI
- Code for "Consistent Estimators for Learning to Defer to an Expert" (ICML 2020)
- Combating hidden stratification with GEORGE
- An Empirical Study of Invariant Risk Minimization
- SimTriplet: PyTorch Implementation
- Active and Sample-Efficient Model Evaluation
- The official repository for the paper "Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning"
- Repository for our NeurIPS 2022 paper "Concept Embedding Models: Beyond the Accuracy-Explainability Trade-Off" and our NeurIPS 2023 paper…
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral)
- Code for the paper "Fuzzy c-Means Clustering for Persistence Diagrams"
- A benchmark for distribution shift in tabular data
- Code to study the generalisability of benchmark models on non-stationary EHRs.
- Official repository for the AAAI-21 paper "Explainable Models with Consistent Interpretations"
- Code for "Quantifying Ignorance in Individual-Level Causal-Effect Estimates under Hidden Confounding"
- Code for the ICLR 2022 paper "Attention-based Interpretability with Concept Transformers"
- Model-agnostic post-hoc calibration without distributional assumptions
- Code for the paper "Getting a CLUE: A Method for Explaining Uncertainty Estimates"
- Learning clinical decision rules with interpretable models.
- B-LRP is the repository for the paper "How Much Can I Trust You? Quantifying Uncertainties in Explaining Neural Networks"
- Gifsplanation - Explaining neural networks with gifs!
- Quantile risk minimization
- TorchEsegeta: Interpretability and Explainability pipeline for PyTorch