deel-ai / deel-lip
Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers
☆96 · Updated 2 months ago
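deel-lip provides Lipschitz-constrained drop-in replacements for standard Keras layers. Below is a minimal sketch of building and training a 1-Lipschitz classifier; it assumes the documented `deel.lip.model.Sequential`, `deel.lip.layers.SpectralDense`, and `deel.lip.activations.GroupSort2` interface, whose exact module paths and arguments may vary between versions.

```python
# Minimal sketch of a 1-Lipschitz MLP with deel-lip (assumes the documented
# deel.lip API; exact module paths/arguments may differ across versions).
import numpy as np
import tensorflow as tf
from deel.lip.model import Sequential        # Lipschitz-aware Sequential model
from deel.lip.layers import SpectralDense    # spectrally normalized Dense layer
from deel.lip.activations import GroupSort2  # gradient-norm-preserving activation

model = Sequential(
    [
        tf.keras.layers.Input(shape=(28 * 28,)),
        SpectralDense(64),
        GroupSort2(),
        SpectralDense(64),
        GroupSort2(),
        SpectralDense(10),
    ],
    k_coef_lip=1.0,  # global Lipschitz constant enforced by the model (assumed keyword)
)

# Train like any Keras model; the constraint is maintained during fit().
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
x = np.random.rand(32, 28 * 28).astype("float32")  # dummy data for illustration
y = np.random.randint(0, 10, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```

GroupSort-style activations are used here instead of ReLU because they preserve gradient norm, which keeps the end-to-end Lipschitz bound of the network tight.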
Alternatives and similar repositories for deel-lip
Users interested in deel-lip are comparing it to the libraries listed below.
- Influenciae is a TensorFlow Toolbox for Influence Functions ☆63 · Updated last year
- ☆37 · Updated last week
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For TensorFlow/Keras implementation, see ht… ☆30 · Updated 3 months ago
- Simple, compact, and hackable post-hoc deep OOD detection for already-trained TensorFlow or PyTorch image classifiers. ☆58 · Updated 3 weeks ago
- New implementations of old orthogonal layers unlock large-scale training. ☆17 · Updated last week
- CODS - Conformal Object Detection and Segmentation ☆12 · Updated this week
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ☆30 · Updated 2 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆34 · Updated last year
- Xplique is a Neural Networks Explainability Toolbox ☆689 · Updated 7 months ago
- Overcomplete is a Vision-based SAE Toolbox ☆57 · Updated 2 months ago
- ☆13 · Updated 2 years ago
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlig… ☆150 · Updated 2 years ago
- Puncc is a Python library for predictive uncertainty quantification using conformal prediction. ☆330 · Updated last week
- Public repo for course material on Bayesian machine learning at ENS Paris-Saclay and Univ Lille ☆87 · Updated 3 months ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆150 · Updated 3 years ago
- LENS Project ☆48 · Updated last year
- [NeurIPS 2023] and [ICLR 2024] for robustness certification. ☆10 · Updated 6 months ago
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… ☆241 · Updated 2 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆129 · Updated 11 months ago
- Conformal prediction for uncertainty quantification in image segmentation ☆23 · Updated 5 months ago
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… ☆66 · Updated 2 years ago
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆64 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 9 months ago
- Model Agnostic Counterfactual Explanations ☆87 · Updated 2 years ago
- Optimal Transport Dataset Distance ☆164 · Updated 3 years ago
- A fairness library in PyTorch. ☆29 · Updated 10 months ago
- Model-agnostic post-hoc calibration without distributional assumptions ☆42 · Updated last year
- Parameter-Free Optimizers for PyTorch ☆129 · Updated last year
- Large-scale uncertainty benchmark in deep learning. ☆60 · Updated 3 weeks ago
- Materials of the Nordic Probabilistic AI School 2023. ☆90 · Updated last year