leoandeol / cods
CODS - Conformal Object Detection and Segmentation
☆17 · Updated last week
Alternatives and similar repositories for cods
Users interested in cods are comparing it to the libraries listed below.
- [CVPRW 2024] Conformal prediction for uncertainty quantification in image segmentation (☆25, updated 10 months ago)
- Simple, compact, and hackable post-hoc deep OOD detection for already-trained TensorFlow or PyTorch image classifiers. (☆60, updated 2 weeks ago)
- Uncertainty-aware representation learning (URL) benchmark (☆105, updated 7 months ago)
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For the TensorFlow/Keras implementation, see ht… (☆34, updated last month)
- Interpreto is an interpretability toolbox for LLMs (☆35, updated this week)
- (☆37, updated 3 weeks ago)
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers (☆100, updated 6 months ago)
- Code repository for the paper "Adaptive Bounding Box Uncertainties via Two-Step Conformal Prediction" @ ECCV 2024 (Oral) (☆13, updated 5 months ago)
- Influenciae is a TensorFlow toolbox for influence functions (☆64, updated last year)
- Code for the paper "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) (☆67, updated 2 years ago)
- Overcomplete is a vision-based SAE toolbox (☆90, updated 2 months ago)
- DiffuLab is designed to provide a simple and flexible way to train diffusion models while allowing full customization of its core compone… (☆38, updated last week)
- Large-scale uncertainty benchmark in deep learning. (☆63, updated 5 months ago)
- Open-source framework for uncertainty and deep learning models in PyTorch (☆433, updated this week)
- Official code for "Enabling Uncertainty Estimation in Iterative Neural Networks" (ICML 2024) (☆18, updated last year)
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) (☆30, updated 3 years ago)
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization (☆133, updated last year)
- Code for the paper "Beyond calibration: estimating the grouping loss of modern neural networks" (ICLR 2023) (☆12, updated last year)
- [NeurIPS 2024] Code for the paper "B-cosification: Transforming Deep Neural Networks to be Inherently Interpretable" (☆35, updated 4 months ago)
- Code for "Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty" (☆144, updated 2 years ago)
- Code for Principal Masked Autoencoders (☆29, updated 6 months ago)
- Classification metrics and post-hoc calibration (☆36, updated 3 months ago)
- Toolkit for explaining the detections of an object detector (☆14, updated 3 years ago)
- Self-Supervised Learning in PyTorch (☆141, updated last year)
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (☆39, updated last year)
- "Deep Networks Grok All the Time and Here is Why" (☆37, updated last year)
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… (☆69, updated 2 years ago)
- Xplique is a neural-networks explainability toolbox (☆702, updated last year)
- A PyTorch Lightning extension that accelerates and enhances foundation-model experimentation with flexible fine-tuning schedules. (☆66, updated last month)
- [ICCV25] Official implementation of LeGrad (☆80, updated 11 months ago)
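Several of the repositories above (including cods itself) are built on conformal prediction. As a minimal sketch of the core idea only, not of any listed library's API, here is split conformal prediction for a regression-style score; the function name and the synthetic data are illustrative assumptions:

```python
import numpy as np

def split_conformal_quantile(cal_preds, cal_labels, alpha=0.1):
    """Split conformal prediction: on a held-out calibration set, take the
    finite-sample-corrected (1 - alpha) quantile of nonconformity scores.
    Here the score is the absolute residual |y - f(x)|."""
    scores = np.abs(cal_labels - cal_preds)
    n = len(scores)
    # ceil((n + 1)(1 - alpha)) / n corrects for finite calibration size
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, q_level, method="higher")

# Illustrative usage with a synthetic predictor whose errors are N(0, 0.5).
rng = np.random.default_rng(0)
cal_preds = rng.normal(size=1000)
cal_labels = cal_preds + rng.normal(scale=0.5, size=1000)
qhat = split_conformal_quantile(cal_preds, cal_labels, alpha=0.1)
# The prediction interval for a new input x is [f(x) - qhat, f(x) + qhat],
# which covers the true label with probability at least 1 - alpha.
```

The same recipe generalizes to detection and segmentation by swapping the nonconformity score (e.g. a box-coordinate or mask-coverage score), which is the direction the conformal object-detection repositories above take.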