deel-ai / deel-lip
Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers
☆100 · Updated 9 months ago
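To make the "k-Lipschitz layers" idea concrete, here is a minimal NumPy sketch (not deel-lip's actual API): a linear map x → Wx is 1-Lipschitz in the L2 norm exactly when the largest singular value of W is at most 1, so dividing W by its spectral norm enforces the constraint.

```python
import numpy as np

# Illustrative only: spectral normalization of a dense layer's weights.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))

# Spectral norm = largest singular value of W.
sigma_max = np.linalg.norm(W, ord=2)
W_1lip = W / sigma_max  # now a 1-Lipschitz linear map

# Check: outputs never move farther apart than the inputs did.
x, y = rng.normal(size=(2, 32))
out_gap = np.linalg.norm(W_1lip @ x - W_1lip @ y)
in_gap = np.linalg.norm(x - y)
assert out_gap <= in_gap + 1e-9
print(round(np.linalg.norm(W_1lip, ord=2), 6))  # -> 1.0
```

Libraries like deel-lip additionally use norm-preserving activations and orthogonality constraints so the bound holds through the whole network, not just one layer.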
Alternatives and similar repositories for deel-lip
Users interested in deel-lip are comparing it to the libraries listed below.
- Influenciae is a TensorFlow Toolbox for Influence Functions ☆64 · Updated last year
- ☆38 · Updated 3 months ago
- Simple, compact, and hackable post-hoc deep OOD detection for already trained TensorFlow or PyTorch image classifiers. ☆60 · Updated last month
- Code for the paper: "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ☆32 · Updated 3 years ago
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For TensorFlow/Keras implementation, see ht… ☆39 · Updated 3 weeks ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆164 · Updated 3 years ago
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… ☆255 · Updated 2 years ago
- New implementations of old orthogonal layers unlock large-scale training. ☆26 · Updated 3 months ago
- Xplique is a Neural Networks Explainability Toolbox ☆725 · Updated 2 weeks ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆40 · Updated last year
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP. ☆239 · Updated 5 months ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆634 · Updated 5 months ago
- Optimal Transport Dataset Distance ☆174 · Updated 3 years ago
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlig… ☆151 · Updated 3 years ago
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… ☆268 · Updated 3 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆252 · Updated last year
- Bayesianize: A Bayesian neural network wrapper in PyTorch ☆90 · Updated last year
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 · Updated 2 years ago
- The net:cal calibration framework is a Python 3 library for measuring and mitigating miscalibration of uncertainty estimates, e.g., by a… ☆368 · Updated last year
- Parameter-Free Optimizers for PyTorch ☆130 · Updated last year
- Code for "Uncertainty Estimation Using a Single Deep Deterministic Neural Network" ☆275 · Updated 3 years ago
- Public repo for course material on Bayesian machine learning at ENS Paris-Saclay and Univ Lille ☆92 · Updated 10 months ago
- HCOMP '22 -- Eliciting and Learning with Soft Labels from Every Annotator ☆10 · Updated 3 years ago
- Materials of the Nordic Probabilistic AI School 2023. ☆91 · Updated 2 years ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆78 · Updated 2 years ago
- Adversarial Black-box Explainer generating Latent Exemplars ☆11 · Updated 3 years ago
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- Code for "Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty" ☆145 · Updated 2 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆140 · Updated last year
- ☆472 · Updated 2 months ago