deel-ai / influenciae
👋 Influenciae is a TensorFlow Toolbox for Influence Functions
⭐ 63 · Updated last year
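Influence functions estimate how much each training example contributed to a given prediction. As a rough illustration of the idea behind the toolbox (not Influenciae's actual API), the sketch below scores a training point against a test point by the dot product of their loss gradients, a first-order approximation in the spirit of TracIn; all function and variable names are hypothetical.

```python
import tensorflow as tf

def example_gradient(model, loss_fn, x, y):
    # Flattened gradient of the per-example loss w.r.t. all trainable weights.
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=False))
    grads = tape.gradient(loss, model.trainable_variables)
    return tf.concat([tf.reshape(g, [-1]) for g in grads], axis=0)

def influence_score(model, loss_fn, train_point, test_point):
    # First-order influence proxy: alignment of the two loss gradients.
    g_train = example_gradient(model, loss_fn, *train_point)
    g_test = example_gradient(model, loss_fn, *test_point)
    return tf.tensordot(g_train, g_test, axes=1)

# Toy usage on a one-layer regression model (purely illustrative).
model = tf.keras.Sequential([tf.keras.Input(shape=(3,)), tf.keras.layers.Dense(1)])
loss_fn = tf.keras.losses.MeanSquaredError()
x_tr, y_tr = tf.random.normal([1, 3]), tf.random.normal([1, 1])
x_te, y_te = tf.random.normal([1, 3]), tf.random.normal([1, 1])
print(float(influence_score(model, loss_fn, (x_tr, y_tr), (x_te, y_te))))
```

A positive score suggests the training point pulls the model's loss on the test point in the same direction as its own; full influence-function methods additionally weight these gradients by an inverse-Hessian product.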
Alternatives and similar repositories for influenciae
Users interested in influenciae are comparing it to the libraries listed below.
- Build and train Lipschitz constrained networks: TensorFlow implementation of k-Lipschitz layers. ⭐ 96 · Updated 4 months ago
- Simple, compact, and hackable post-hoc deep OOD detection for already trained TensorFlow or PyTorch image classifiers. ⭐ 58 · Updated 2 weeks ago
- ⭐ 37 · Updated 2 weeks ago
- New implementations of old orthogonal layers unlock large scale training. ⭐ 17 · Updated 2 weeks ago
- 👋 Xplique is a Neural Networks Explainability Toolbox. ⭐ 692 · Updated 9 months ago
- Documentation. ⭐ 29 · Updated last week
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics. ⭐ 36 · Updated last year
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation. ⭐ 132 · Updated 2 months ago
- 👋 Overcomplete is a Vision-based SAE Toolbox. ⭐ 67 · Updated 3 months ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations. ⭐ 607 · Updated last week
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For TensorFlow/Keras implementation, see ht… ⭐ 31 · Updated 3 weeks ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations. ⭐ 247 · Updated 10 months ago
- 👋 Puncc is a python library for predictive uncertainty quantification using conformal prediction. ⭐ 336 · Updated last month (a minimal split-conformal sketch follows this list)
- Training and evaluating NBM and SPAM for interpretable machine learning. ⭐ 78 · Updated 2 years ago
- LENS Project. ⭐ 48 · Updated last year
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… ⭐ 242 · Updated 2 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ⭐ 227 · Updated this week
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlight). ⭐ 150 · Updated 2 years ago
- A toolkit for quantitative evaluation of data attribution methods. ⭐ 49 · Updated last week
- relplot: Utilities for measuring calibration and plotting reliability diagrams. ⭐ 163 · Updated 2 weeks ago
- OpenDataVal: a Unified Benchmark for Data Valuation in Python (NeurIPS 2023). ⭐ 99 · Updated 5 months ago
- For calculating Shapley values via linear regression. ⭐ 68 · Updated 4 years ago
- A repo for transfer learning with deep tabular models. ⭐ 104 · Updated 2 years ago
- Influence Estimation for Gradient-Boosted Decision Trees. ⭐ 29 · Updated last year
- A fairness library in PyTorch. ⭐ 29 · Updated 11 months ago
- 👋 Code for the paper: "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021). ⭐ 30 · Updated 2 years ago
- Python package to compute interaction indices that extend the Shapley Value. AISTATS 2023. ⭐ 17 · Updated last year
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms. ⭐ 292 · Updated last year
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments. ⭐ 74 · Updated 2 years ago
- Open-source framework for uncertainty and deep learning models in PyTorch. ⭐ 409 · Updated 3 weeks ago
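Several entries above revolve around conformal prediction (the Puncc item flagged earlier, and the prediction-set wrapper). As a minimal sketch of the split-conformal recipe those libraries build on, assuming a held-out calibration set and using only NumPy, with all names hypothetical and no claim about either library's real API:

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred_new, alpha=0.1):
    """Given absolute residuals on a held-out calibration set, return a
    (1 - alpha) prediction interval around a new point prediction."""
    n = len(residuals_cal)
    # Finite-sample-corrected quantile level, capped at 1 for small n.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals_cal, q_level, method="higher")
    return y_pred_new - q, y_pred_new + q

# Toy usage: a "model" that predicts the calibration-set mean.
rng = np.random.default_rng(0)
y_cal = rng.normal(size=200)
residuals = np.abs(y_cal - y_cal.mean())  # |y - y_hat| on calibration data
lo, hi = split_conformal_interval(residuals, y_pred_new=0.0)
print(f"90% interval: [{lo:.2f}, {hi:.2f}]")
```

The coverage guarantee is distribution-free: as long as calibration and test points are exchangeable, the interval contains the true value with probability at least 1 - alpha, which is the property the conformal libraries above package up behind richer APIs.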