deel-ai / deel-lip
Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers
☆95 · Updated last month
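To make the header concrete: a linear layer is k-Lipschitz (in the L2 norm) exactly when its weight matrix's largest singular value is at most k. The sketch below is not deel-lip's API; it is a minimal NumPy illustration, under that definition, of the spectral rescaling such libraries enforce during training.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random dense weight matrix: its Lipschitz constant w.r.t. the L2 norm
# is its spectral norm (largest singular value).
W = rng.normal(size=(64, 32))

# Spectral normalization: rescale so the largest singular value equals k,
# making x -> W_k @ x a k-Lipschitz map.
k = 1.0
W_k = W * (k / np.linalg.norm(W, 2))

# Check the Lipschitz property on a random pair of inputs:
# ||W_k x - W_k y|| <= k * ||x - y||.
x, y = rng.normal(size=(2, 32))
ratio = np.linalg.norm(W_k @ x - W_k @ y) / np.linalg.norm(x - y)
print(round(float(np.linalg.norm(W_k, 2)), 6))  # 1.0
print(bool(ratio <= k + 1e-9))  # True
```

Libraries such as deel-lip apply this kind of constraint layer by layer, since composing k-Lipschitz layers bounds the Lipschitz constant of the whole network.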
Alternatives and similar repositories for deel-lip:
Users interested in deel-lip are comparing it to the libraries listed below.
- Influenciae is a TensorFlow toolbox for influence functions (☆62, updated last year)
- (☆35, updated 2 weeks ago)
- Simple, compact, and hackable post-hoc deep OOD detection for already-trained TensorFlow or PyTorch image classifiers (☆56, updated this week)
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For the TensorFlow/Keras implementation, see ht… (☆29, updated last month)
- Overcomplete is a vision-based SAE toolbox (☆51, updated 3 weeks ago)
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) (☆28, updated 2 years ago)
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (☆34, updated last year)
- (☆13, updated 2 years ago)
- Xplique is a neural network explainability toolbox (☆682, updated 6 months ago)
- LENS Project (☆47, updated last year)
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) (☆62, updated last year)
- (☆10, updated 5 months ago)
- Puncc is a Python library for predictive uncertainty quantification using conformal prediction (☆321, updated 3 months ago)
- HCOMP '22: Eliciting and Learning with Soft Labels from Every Annotator (☆10, updated 2 years ago)
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation (☆124, updated last week)
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… (☆238, updated 2 years ago)
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization (☆124, updated 10 months ago)
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (☆243, updated 8 months ago)
- A toolkit for quantitative evaluation of data attribution methods (☆44, updated this week)
- Conformal prediction for uncertainty quantification in image segmentation (☆22, updated 4 months ago)
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP (☆222, updated 8 months ago)
- Reliability diagrams visualize whether a classifier model needs calibration (☆149, updated 3 years ago)
- XAI-Bench is a library for benchmarking feature attribution explainability techniques (☆63, updated 2 years ago)
- Calibration library and code for the paper "Verified Uncertainty Calibration", Ananya Kumar, Percy Liang, Tengyu Ma, NeurIPS 2019 (Spotlig… (☆148, updated 2 years ago)
- Local explanations with uncertainty! (☆39, updated last year)
- Model-Agnostic Counterfactual Explanations (☆88, updated 2 years ago)
- This repository contains a JAX implementation of conformal training corresponding to the ICLR '22 paper "learning optimal conformal classi… (☆129, updated 2 years ago)
- (ICML 2023) Feature learning in deep classifiers through Intermediate Neural Collapse: accompanying code (☆13, updated last year)
- Explores the ideas presented in "Deep Ensembles: A Loss Landscape Perspective" (https://arxiv.org/abs/1912.02757) by Stanislav Fort, Huiyi … (☆65, updated 4 years ago)
- Last-layer Laplace approximation code examples (☆83, updated 3 years ago)