googleinterns / controllabledl
☆37 · Updated 4 years ago
Alternatives and similar repositories for controllabledl
Users interested in controllabledl are comparing it to the libraries listed below.
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling ☆31 · Updated 4 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning ☆78 · Updated 2 years ago
- Code repository for the NAACL 2022 paper "ExSum: From Local Explanations to Model Understanding" ☆64 · Updated 3 years ago
- ☆18 · Updated 3 years ago
- Weakly Supervised End-to-End Learning (NeurIPS 2021) ☆157 · Updated 2 years ago
- ☆138 · Updated last year
- A lightweight implementation of removal-based explanations for ML models ☆59 · Updated 3 years ago
- TabDPT: Scaling Tabular Foundation Models ☆30 · Updated 2 months ago
- Interpretable and efficient predictors using pre-trained language models; scikit-learn compatible ☆42 · Updated 3 months ago
- Neural Additive Models (Google Research) ☆70 · Updated 3 years ago
- XAI-Bench, a library for benchmarking feature attribution explainability techniques ☆68 · Updated 2 years ago
- An Empirical Study of Invariant Risk Minimization ☆27 · Updated 4 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆67 · Updated 11 months ago
- Measuring whether attention is explanation with ROAR ☆22 · Updated 2 years ago
- Library implementing state-of-the-art concept-based and disentanglement learning methods for explainable AI ☆55 · Updated 2 years ago
- ☆35 · Updated 6 months ago
- Active and Sample-Efficient Model Evaluation ☆24 · Updated last month
- Code for gradient rollback, which explains predictions of neural matrix factorization models, as used for example in knowledge base comp… ☆21 · Updated 4 years ago
- Logic Explained Networks, a Python repository implementing explainable-by-design deep learning models ☆50 · Updated 2 years ago
- A benchmark of data-centric tasks from across the machine learning lifecycle ☆72 · Updated 3 years ago
- Official code repository for the paper "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", NeurIPS 2… ☆39 · Updated 2 years ago
- ☆54 · Updated 2 years ago
- diagNNose, a Python library providing a broad set of tools for analysing hidden activations of neural models ☆82 · Updated last year
- ☆32 · Updated 3 years ago
- ☆13 · Updated 6 years ago
- Official repository for the paper "Zero-Shot AutoML with Pretrained Models" ☆47 · Updated last year
- Repository for the Multimodal AutoML Benchmark ☆66 · Updated 3 years ago
- MODALS: Modality-agnostic Automated Data Augmentation in the Latent Space ☆41 · Updated 4 years ago
- Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees" ☆28 · Updated 4 years ago
- Code repository for the paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆105 · Updated last year