googleinterns / controllabledl
☆34 · Updated 3 years ago
Related projects
Alternatives and complementary repositories for controllabledl
- Interactive Weak Supervision: Learning Useful Heuristics for Data Labeling ☆30 · Updated 3 years ago
- A lightweight implementation of removal-based explanations for ML models. ☆57 · Updated 3 years ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆38 · Updated 7 months ago
- Neural Additive Models (Google Research) ☆67 · Updated 3 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆76 · Updated last year
- Official Code Repo for the Paper: "How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions", In NeurIPS 2… ☆37 · Updated 2 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆60 · Updated 4 months ago
- Code repository for the NAACL 2022 paper "ExSum: From Local Explanations to Model Understanding" ☆63 · Updated 2 years ago
- Repository for code release of paper "Robust Variational Autoencoders for Outlier Detection and Repair of Mixed-Type Data" (AISTATS 2020) ☆50 · Updated 4 years ago
- Skip-gram word embeddings in hyperbolic space ☆34 · Updated 6 years ago
- ☆34 · Updated 11 months ago
- Extending Conformal Prediction to LLMs ☆58 · Updated 5 months ago
- Solving the causality pairs challenge (does A cause B) with ChatGPT ☆75 · Updated 5 months ago
- Weakly Supervised End-to-End Learning (NeurIPS 2021) ☆153 · Updated last year
- Repository for Multimodal AutoML Benchmark ☆61 · Updated 2 years ago
- This repository contains a Jax implementation of conformal training corresponding to the ICLR'22 paper "learning optimal conformal classi… ☆122 · Updated 2 years ago
- An Empirical Study of Invariant Risk Minimization ☆28 · Updated 4 years ago
- diagNNose is a Python library that provides a broad set of tools for analysing hidden activations of neural models. ☆81 · Updated last year
- ☆18 · Updated 2 years ago
- Measuring if attention is explanation with ROAR ☆22 · Updated last year
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆57 · Updated last year
- Beta Shapley: a Unified and Noise-reduced Data Valuation Framework for Machine Learning (AISTATS 2022 Oral) ☆40 · Updated 2 years ago
- Implementation of experiments in paper "Learning from Rules Generalizing Labeled Exemplars" to appear in ICLR2020 (https://openreview.net… ☆49 · Updated last year
- SPEAR: Programmatically label and build training data quickly. ☆103 · Updated 4 months ago
- Updated code base for GlanceNets: Interpretable, Leak-proof Concept-based models ☆25 · Updated last year
- Code for paper "When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data" ☆14 · Updated 3 years ago
- A practical Active Learning python package with a strong focus on experiments. ☆51 · Updated 2 years ago
- Model zoo for different kinds of uncertainty quantification methods used in Natural Language Processing, implemented in PyTorch. ☆47 · Updated last year
- Hyperbolic PCA via Horospherical Projections ☆68 · Updated last year
- ☆30 · Updated 2 years ago