deel-ai / influenciae
Influenciae is a TensorFlow toolbox for influence functions.
☆57 · Updated 9 months ago
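As background (not part of the original listing), the quantity that influence-function toolboxes such as this one estimate is the classical approximation of Koh & Liang (2017): the effect of upweighting a training point $z$ on the loss at a test point $z_{\text{test}}$,

$$\mathcal{I}(z, z_{\text{test}}) = -\nabla_\theta L(z_{\text{test}}, \hat\theta)^{\top} \, H_{\hat\theta}^{-1} \, \nabla_\theta L(z, \hat\theta), \qquad H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n} \nabla^2_\theta L(z_i, \hat\theta),$$

where $\hat\theta$ is the trained model's parameters and $H_{\hat\theta}$ the empirical Hessian of the training loss.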
Alternatives and similar repositories for influenciae:
Users interested in influenciae are comparing it to the libraries listed below.
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers (☆90, updated last month)
- Simple, compact, and hackable post-hoc deep OOD detection for already-trained TensorFlow or PyTorch image classifiers (☆53, updated last month)
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For TensorFlow/Keras implementation, see ht… (☆27, updated this week)
- (☆27, updated 9 months ago)
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation (☆115, updated this week)
- Puncc is a Python library for predictive uncertainty quantification using conformal prediction (☆311, updated last week)
- Xplique is a Neural Networks Explainability Toolbox (☆657, updated 3 months ago)
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) (☆27, updated 2 years ago)
- Conformal prediction for uncertainty quantification in image segmentation (☆18, updated last month)
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (☆32, updated 9 months ago)
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… (☆235, updated last year)
- Model-agnostic post-hoc calibration without distributional assumptions (☆42, updated last year)
- Launching and monitoring Slurm experiments in Python (☆15, updated last week)
- (☆13, updated 2 years ago)
- Specify, execute, and monitor the performance of active learning pipelines (☆21, updated 3 months ago)
- Python package to compute interaction indices that extend the Shapley value (AISTATS 2023) (☆17, updated last year)
- Calibration library and code for the paper "Verified Uncertainty Calibration", Ananya Kumar, Percy Liang, Tengyu Ma, NeurIPS 2019 (Spotlig… (☆143, updated 2 years ago)
- Domain adaptation toolbox compatible with scikit-learn and PyTorch (☆97, updated last month)
- Out-of-Distribution Detection with PyTorch (☆270, updated this week)
- Fast and incremental explanations for online machine learning models. Works best with the river framework (☆52, updated 3 weeks ago)
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (☆237, updated 5 months ago)
- HCOMP '22: Eliciting and Learning with Soft Labels from Every Annotator (☆10, updated 2 years ago)
- Code for the paper "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) (☆61, updated last year)
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments (☆73, updated 2 years ago)
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations (☆573, updated 2 months ago)
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… (☆257, updated 4 months ago)
- LENS Project (☆44, updated 10 months ago)
- A fairness library in PyTorch (☆26, updated 5 months ago)
- Model Agnostic Counterfactual Explanations (☆87, updated 2 years ago)
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods like LRP (☆208, updated 5 months ago)