deel-ai / influenciae
Influenciae is a TensorFlow Toolbox for Influence Functions
★56 · Updated 7 months ago
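Influence functions estimate how the loss on a test point would change if a single training example were removed, without retraining the model. The following is a minimal NumPy sketch of the classic first-order formula for a ridge-regression model, where the Hessian is exact and small enough to invert directly; it is an illustrative toy, not the influenciae API:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 50, 3, 1.0

# Synthetic ridge-regression problem. Total loss:
#   L(theta) = sum_i 0.5 * (x_i . theta - y_i)^2 + 0.5 * lam * ||theta||^2
X = rng.normal(size=(n, d))
true_theta = np.array([1.0, -2.0, 0.5])
y = X @ true_theta + 0.1 * rng.normal(size=n)

H = X.T @ X + lam * np.eye(d)        # exact Hessian of L (quadratic loss)
theta = np.linalg.solve(H, X.T @ y)  # minimizer of L

x_test = rng.normal(size=d)
y_test = x_test @ true_theta

def point_grad(x, y_i, th):
    """Gradient of the per-example loss 0.5 * (x . th - y_i)^2."""
    return (x @ th - y_i) * x

# First-order influence of removing training point i on the test loss:
#   delta_test_loss ~= grad_test^T  H^{-1}  grad_i
g_test = point_grad(x_test, y_test, theta)
influence = np.array(
    [g_test @ np.linalg.solve(H, point_grad(X[i], y[i], theta)) for i in range(n)]
)
print("most influential training point:", int(np.argmax(np.abs(influence))))
```

A positive score predicts that removing the point would increase the test loss. For deep networks the explicit Hessian solve is intractable, which is where toolboxes such as influenciae and pyDVL come in with iterative and low-rank approximations.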
Related projects
Alternatives and complementary repositories for influenciae
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers (★89, updated last month)
- Simple, compact, and hackable post-hoc deep OOD detection for already-trained TensorFlow or PyTorch image classifiers (★52, updated this week)
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For the TensorFlow/Keras implementation, see ht… (★27, updated last week)
- Puncc is a Python library for predictive uncertainty quantification using conformal prediction (★300, updated this week)
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation (★108, updated last week)
- Xplique is a Neural Networks Explainability Toolbox (★647, updated last month)
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (★30, updated 7 months ago)
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) (★27, updated 2 years ago)
- Conformal prediction for uncertainty quantification in image segmentation (★14, updated last month)
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… (★229, updated last year)
- Bayesian LIME (★16, updated 3 months ago)
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations (★558, updated last week)
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (★232, updated 3 months ago)
- Model-agnostic post-hoc calibration without distributional assumptions (★42, updated last year)
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) (★56, updated last year)
- Training and evaluating NBM and SPAM for interpretable machine learning (★76, updated last year)
- PyTorch Explain: interpretable deep learning in Python (★145, updated 6 months ago)
- Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP (★202, updated 4 months ago)
- relplot: utilities for measuring calibration and plotting reliability diagrams (★134, updated 5 months ago)
- Code for "NODE-GAM: Neural Generalized Additive Model for Interpretable Deep Learning" (★43, updated 2 years ago)
- Reliability diagrams visualize whether a classifier model needs calibration (★137, updated 2 years ago)
- TabDPT: Scaling Tabular Foundation Models (★10, updated 3 weeks ago)
- A toolkit for quantitative evaluation of data attribution methods (★33, updated this week)
- An amortized approach for calculating local Shapley value explanations (★92, updated 11 months ago)
- An interactive framework to visualize and analyze your AutoML process in real time (★72, updated 2 weeks ago)
- scikit-activeml: Python library for active learning on top of scikit-learn (★155, updated this week)
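Several of the listed projects (Puncc, the segmentation library, the PyTorch prediction-set wrapper) are built on conformal prediction. A minimal split-conformal sketch for regression, assuming a toy fixed predictor and synthetic data; this is an illustration of the general recipe, not the API of any project above:

```python
import numpy as np

rng = np.random.default_rng(1)
n_cal, alpha = 200, 0.1  # calibration size, target miscoverage

# Toy fixed predictor f(x) = 2x; in practice this is any pre-trained model.
f = lambda x: 2.0 * x

# Held-out calibration data drawn from the same distribution as test data.
x_cal = rng.uniform(-1, 1, size=n_cal)
y_cal = 2.0 * x_cal + rng.normal(scale=0.3, size=n_cal)

# Nonconformity scores: absolute residuals on the calibration split.
scores = np.abs(y_cal - f(x_cal))

# Conformal quantile with the finite-sample correction ceil((n+1)(1-alpha)).
k = int(np.ceil((n_cal + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

def predict_interval(x):
    """Interval [f(x) - q, f(x) + q]; covers the true y with probability
    at least 1 - alpha, marginally, under exchangeability."""
    return f(x) - q, f(x) + q
```

The guarantee is distribution-free: it relies only on the calibration and test points being exchangeable, not on the model being well specified.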