deel-ai / influenciae
Influenciae is a TensorFlow Toolbox for Influence Functions
★63, updated last year
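Influence functions estimate how a trained model's loss on a test point would change if a given training example were removed, without actually retraining. The following is a minimal first-order sketch of that idea in plain NumPy for ordinary least squares (it does not use the Influenciae API; all names here are illustrative). The estimate follows the classic up-weighting formula: the influence of training point i on the test loss is approximately g_testᵀ H⁻¹ g_i / n, where H is the Hessian of the mean training loss.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)

# Fit ordinary least squares and form the Hessian of the mean squared-error loss
w = np.linalg.solve(X.T @ X, X.T @ y)
H = X.T @ X / n

x_test = rng.normal(size=d)
y_test = x_test @ w_true

# Per-sample training gradients and the test-point gradient at the optimum
grads = (X @ w - y)[:, None] * X            # shape (n, d)
g_test = (x_test @ w - y_test) * x_test     # shape (d,)

# First-order influence of each training point on the test loss
influence = grads @ np.linalg.solve(H, g_test) / n

# Ground truth: retrain without each point (leave-one-out) and measure
# the actual change in test loss; cheap here because the model is tiny
loo = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    w_i = np.linalg.solve(X[mask].T @ X[mask], X[mask].T @ y[mask])
    loo[i] = 0.5 * (x_test @ w_i - y_test) ** 2 - 0.5 * (x_test @ w - y_test) ** 2
```

For a quadratic loss the first-order estimate tracks the true leave-one-out change closely; libraries like Influenciae and pyDVL exist because, for deep networks, forming and inverting H (or approximating H⁻¹g products) is the hard part.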
Alternatives and similar repositories for influenciae
Users interested in influenciae are comparing it to the libraries listed below.
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers (★96, updated 2 months ago)
- Simple, compact, and hackable post-hoc deep OOD detection for already-trained TensorFlow or PyTorch image classifiers (★57, updated 3 weeks ago)
- ★37, updated this week
- New implementations of old orthogonal layers unlock large-scale training (★17, updated last week)
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For the TensorFlow/Keras implementation, see ht… (★30, updated 3 months ago)
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (★34, updated last year)
- CODS - Conformal Object Detection and Segmentation (★12, updated this week)
- Overcomplete is a vision-based SAE toolbox (★57, updated 2 months ago)
- Xplique is a neural network explainability toolbox (★689, updated 7 months ago)
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) (★30, updated 2 years ago)
- LENS Project (★48, updated last year)
- Puncc is a Python library for predictive uncertainty quantification using conformal prediction (★330, updated last week)
- ★11, updated last month
- Conformal prediction for uncertainty quantification in image segmentation (★23, updated 5 months ago)
- CoSy: Evaluating Textual Explanations (★16, updated 4 months ago)
- ★13, updated 2 years ago
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… (★66, updated 2 years ago)
- Python package to compute interaction indices that extend the Shapley value (AISTATS 2023) (★17, updated last year)
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation (★129, updated 3 weeks ago)
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization (★129, updated 11 months ago)
- A toolkit for quantitative evaluation of data attribution methods (★47, updated last month)
- An amortized approach for calculating local Shapley value explanations (★97, updated last year)
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) (★64, updated last year)
- HCOMP '22 -- Eliciting and Learning with Soft Labels from Every Annotator (★10, updated 2 years ago)
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (★247, updated 9 months ago)
- ★13, updated 2 weeks ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations (★599, updated 3 months ago)
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… (★241, updated 2 years ago)
- A fairness library in PyTorch (★29, updated 10 months ago)
- XAI-Bench is a library for benchmarking feature attribution explainability techniques (★66, updated 2 years ago)
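Several entries above (Puncc, the segmentation and risk-control repositories, and the PyTorch prediction-set wrapper) are built on conformal prediction: wrapping any trained model so its outputs carry a finite-sample coverage guarantee. A minimal split-conformal sketch for regression, in plain NumPy (none of the listed libraries' APIs are used; the base model and names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_predict(X, y, X_new):
    # Illustrative base model: ordinary least squares.
    # Conformal prediction works with any predictor treated as a black box.
    w = np.linalg.solve(X.T @ X, X.T @ y)
    return X_new @ w

n, d = 500, 2
X = rng.normal(size=(n, d))
y = X @ np.array([2.0, -1.0]) + rng.normal(size=n)

# Split the data: fit the model on one half, calibrate on the other
X_fit, y_fit = X[:250], y[:250]
X_cal, y_cal = X[250:], y[250:]

# Nonconformity scores on the calibration set: absolute residuals
scores = np.abs(y_cal - fit_predict(X_fit, y_fit, X_cal))

# Conformal quantile for target coverage 1 - alpha, with the
# (n_cal + 1) finite-sample correction
alpha = 0.1
n_cal = len(scores)
q = np.quantile(scores, np.ceil((n_cal + 1) * (1 - alpha)) / n_cal,
                method="higher")

# Prediction interval for a new point: [y_hat - q, y_hat + q]
x_new = np.array([[0.5, 0.5]])
y_hat = fit_predict(X_fit, y_fit, x_new)[0]
interval = (y_hat - q, y_hat + q)
```

If the calibration and test points are exchangeable, intervals built this way contain the true label with probability at least 1 - alpha, regardless of how good the base model is; the listed libraries extend this idea to classification sets, segmentation masks, and general monotonic risks.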