deel-ai / oodeel
Simple, compact, and hackable post-hoc deep OOD detection for already trained TensorFlow or PyTorch image classifiers.
★58 · Updated 3 weeks ago
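The one-line description above is all the page gives, so as a rough illustration of what "post-hoc" OOD detection means here, the sketch below applies the classic maximum-softmax-probability baseline to an already trained PyTorch classifier. This is not oodeel's API; the `classifier`, loaders, and threshold choice are placeholder assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_scores(model, loader, device="cpu"):
    """Maximum softmax probability per sample; lower values suggest OOD inputs."""
    model.eval()
    model.to(device)
    all_scores = []
    for images, _ in loader:                      # any (image, label) DataLoader
        logits = model(images.to(device))
        probs = F.softmax(logits, dim=-1)
        all_scores.append(probs.max(dim=-1).values.cpu())
    return torch.cat(all_scores)

# Usage sketch (names are placeholders): score an in-distribution split and a
# suspect split, then threshold at, e.g., the 5th percentile of the ID scores.
# id_scores  = msp_scores(classifier, id_loader)
# ood_scores = msp_scores(classifier, ood_loader)
# threshold  = torch.quantile(id_scores, 0.05)
```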
Alternatives and similar repositories for oodeel
Users interested in oodeel are comparing it to the libraries listed below.
- Influenciae is a TensorFlow Toolbox for Influence Functions · ★63 · Updated last year
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers · ★96 · Updated 2 months ago
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For TensorFlow/Keras implementation, see ht… · ★30 · Updated 3 months ago
- New implementations of old orthogonal layers unlock large-scale training · ★17 · Updated last week
- ★37 · Updated last week
- CODS - Conformal Object Detection and Segmentation · ★12 · Updated this week
- Xplique is a Neural Networks Explainability Toolbox · ★689 · Updated 7 months ago
- Overcomplete is a Vision-based SAE Toolbox · ★57 · Updated 2 months ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics · ★34 · Updated last year
- Puncc is a Python library for predictive uncertainty quantification using conformal prediction · ★330 · Updated last week
- Code for the paper: "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) · ★30 · Updated 2 years ago
- Conformal prediction for uncertainty quantification in image segmentation · ★23 · Updated 5 months ago
- ★13 · Updated 2 years ago
- ★11 · Updated last month
- Open-source framework for uncertainty and deep learning models in PyTorch · ★393 · Updated last week
- Confident Object Detection via Conformal Prediction and Conformal Risk Control: an Application to Railway Signaling · ★11 · Updated 2 years ago
- LENS Project · ★48 · Updated last year
- Code for "Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty" · ★141 · Updated 2 years ago
- Code for: "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) · ★64 · Updated last year
- Reliability diagrams visualize whether a classifier model needs calibration · ★150 · Updated 3 years ago
- HCOMP '22 -- Eliciting and Learning with Soft Labels from Every Annotator · ★10 · Updated 2 years ago
- A toolkit for quantitative evaluation of data attribution methods · ★47 · Updated last month
- Official PyTorch implementation of improved B-cos models · ★48 · Updated last year
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… (see the split conformal sketch after this list) · ★241 · Updated 2 years ago
- ★13 · Updated 3 weeks ago
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" · ★161 · Updated last year
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations · ★599 · Updated 3 months ago
- Uncertainty-aware representation learning (URL) benchmark · ★105 · Updated 2 months ago
- Out-of-Distribution Detection with PyTorch · ★293 · Updated 3 weeks ago
- Code for the paper: Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery (ECCV 2024) · ★44 · Updated 7 months ago
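For the prediction-set wrapper listed above, the underlying recipe is standard split conformal prediction. The sketch below is a generic NumPy version of that recipe under assumed inputs (calibration softmax probabilities and labels); it is not the wrapper's actual API, and the toy arrays at the end are made-up placeholders.

```python
import numpy as np

def conformal_quantile(cal_probs, cal_labels, alpha=0.1):
    """Split conformal calibration: returns the score quantile q_hat so that the
    prediction sets built below cover the true class with probability >= 1 - alpha."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]    # nonconformity = 1 - p(true class)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample corrected level
    return np.quantile(scores, level, method="higher")

def prediction_sets(test_probs, q_hat):
    """Boolean mask of classes kept in each set: keep a class if p(class) >= 1 - q_hat."""
    return test_probs >= (1.0 - q_hat)

# Toy example with 3 classes (placeholder numbers):
cal_probs = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4], [0.6, 0.3, 0.1]])
cal_labels = np.array([0, 1, 2, 0])
q_hat = conformal_quantile(cal_probs, cal_labels, alpha=0.2)
sets = prediction_sets(np.array([[0.5, 0.3, 0.2]]), q_hat)
```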