deel-ai / influenciae
Influenciae is a TensorFlow Toolbox for Influence Functions
☆64 · Updated last year
Alternatives and similar repositories for influenciae
Users interested in influenciae are comparing it to the libraries listed below.
- Build and train Lipschitz constrained networks: TensorFlow implementation of k-Lipschitz layers ☆100 · Updated 9 months ago
- Simple, compact, and hackable post-hoc deep OOD detection for already trained TensorFlow or PyTorch image classifiers. ☆60 · Updated last month
- ☆38 · Updated 3 months ago
- New implementations of old orthogonal layers unlock large-scale training. ☆26 · Updated 3 months ago
- Xplique is a Neural Networks Explainability Toolbox ☆725 · Updated 2 weeks ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆40 · Updated last year
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆239 · Updated 5 months ago
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation ☆141 · Updated 4 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆140 · Updated last year
- Adversarial attacks on explanations and how to defend them ☆332 · Updated last year
- Puncc is a Python library for predictive uncertainty quantification using conformal prediction. ☆366 · Updated last month
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆634 · Updated 5 months ago
- scikit-activeml: A Comprehensive and User-friendly Active Learning Library ☆181 · Updated this week
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… ☆255 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 · Updated 2 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆78 · Updated 2 years ago
- A fairness library in PyTorch. ☆32 · Updated last year
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ☆32 · Updated 3 years ago
- An amortized approach for calculating local Shapley value explanations ☆104 · Updated 2 years ago
- LENS Project ☆51 · Updated last year
- Reliability diagrams visualize whether a classifier model needs calibration ☆164 · Updated 3 years ago
- A toolkit for quantitative evaluation of data attribution methods. ☆54 · Updated 5 months ago
- PyTorch Explain: Interpretable Deep Learning in Python. ☆166 · Updated last year
- Overcomplete is a Vision-based SAE Toolbox ☆112 · Updated last month
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… ☆268 · Updated 3 months ago
- Domain adaptation toolbox compatible with scikit-learn and PyTorch ☆151 · Updated 3 weeks ago
- For calculating Shapley values via linear regression. ☆72 · Updated 4 years ago
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆71 · Updated 2 years ago
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆299 · Updated 2 years ago
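The influence-function libraries in this list (influenciae, pyDVL) are built around the classic estimator of Koh & Liang (2017): the influence of a training point z on the loss at a test point z_test is −∇L(z_test)ᵀ H⁻¹ ∇L(z), where H is the Hessian of the training loss at the fitted parameters. A minimal NumPy sketch of that formula on a toy quadratic loss is below; the function name and inputs are illustrative, not the API of either library, which use iterative Hessian-vector-product solvers rather than an explicit Hessian.

```python
import numpy as np

def influence(grad_test, hessian, grad_train):
    """Koh & Liang influence estimate: -grad L(z_test)^T H^-1 grad L(z).

    Illustrative only -- real toolboxes avoid forming H explicitly and
    instead approximate H^-1 v with conjugate gradients or LiSSA.
    """
    h_inv_grad = np.linalg.solve(hessian, grad_train)  # H^-1 grad L(z)
    return -grad_test @ h_inv_grad

# Toy quadratic loss L(theta) = 0.5 * theta^T A theta, so the Hessian is
# simply A and per-point gradients can be anything we plug in.
A = np.array([[2.0, 0.0],
              [0.0, 4.0]])
g_test = np.array([1.0, 1.0])   # gradient of the loss at the test point
g_train = np.array([2.0, 0.0])  # gradient of the loss at the training point

score = influence(g_test, A, g_train)
# H^-1 g_train = [1, 0], so score = -(1*1 + 1*0) = -1.0
print(score)
```

A negative score under this sign convention means up-weighting the training point would decrease the test loss, i.e. the point is helpful for that prediction; large positive scores flag harmful or mislabeled training examples.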