gabrieljaguiar / image-meta-feature-extractor
An image meta-feature extractor for meta-learning tasks.
☆13 · Updated 2 years ago
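A meta-feature extractor summarizes a dataset (here, an image) as a vector of descriptive statistics that a meta-learner can consume. The sketch below is purely illustrative of the idea, using only the Python standard library; the function name and feature set are hypothetical and are not this repository's API.

```python
# Illustrative sketch only: a few simple statistical meta-features for a
# grayscale image given as a list of pixel rows. The function name and
# the chosen features are hypothetical, NOT the repository's API.
import statistics

def extract_meta_features(image):
    """Flatten the image and return basic statistical meta-features."""
    pixels = [p for row in image for p in row]
    return {
        "n_pixels": len(pixels),
        "mean": statistics.mean(pixels),
        "std": statistics.pstdev(pixels),   # population standard deviation
        "min": min(pixels),
        "max": max(pixels),
    }

# Example: a tiny 2x3 "image"
features = extract_meta_features([[0, 128, 255], [64, 64, 64]])
print(features["n_pixels"])  # 6
```

In a meta-learning setting, such vectors (one per dataset) become the inputs from which a meta-model predicts, for example, which algorithm or hyperparameters will perform best.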
Alternatives and similar repositories for image-meta-feature-extractor
Users interested in image-meta-feature-extractor are comparing it to the libraries listed below.
- Post-hoc Nemenyi test for algorithm statistical comparison. ☆22 · Updated 5 years ago
- Meaningful Local Explanation for Machine Learning Models ☆42 · Updated 2 years ago
- Extended Complexity Library in R ☆58 · Updated 4 years ago
- Code for the paper "Calibrating Deep Neural Networks using Focal Loss" ☆161 · Updated last year
- Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human… ☆74 · Updated 3 years ago
- Reliability diagrams visualize whether a classifier model needs calibration ☆161 · Updated 3 years ago
- ☆16 · Updated 6 years ago
- DISTIL: Deep dIverSified inTeractIve Learning. An active/interactive learning library built on PyTorch for reducing labeling costs. ☆154 · Updated 2 years ago
- Calibration of Convolutional Neural Networks ☆170 · Updated 2 years ago
- Wrapper for a PyTorch classifier which allows it to output prediction sets. The sets are theoretically guaranteed to contain the true cla… ☆252 · Updated 2 years ago
- Detect model's attention ☆169 · Updated 5 years ago
- Calibration library and code for the paper: Verified Uncertainty Calibration. Ananya Kumar, Percy Liang, Tengyu Ma. NeurIPS 2019 (Spotlig… ☆151 · Updated 3 years ago
- Reference tables to introduce and organize evaluation methods and measures for explainable machine learning systems ☆75 · Updated 3 years ago
- A repo for transfer learning with deep tabular models ☆104 · Updated 2 years ago
- Papers and code on Explainable AI, especially w.r.t. image classification ☆223 · Updated 3 years ago
- Python Meta-Feature Extractor package. ☆136 · Updated 3 months ago
- NumPy library for calibration metrics ☆73 · Updated last month
- Wasserstein Adversarial Active Learning ☆29 · Updated 5 years ago
- Code repository for our paper "Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift": https://arxiv.org/abs/1810.119… ☆107 · Updated last year
- Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" ht… ☆128 · Updated 4 years ago
- Domain adaptation made easy. Fully featured, modular, and customizable. ☆389 · Updated 2 years ago
- Code for Deterministic Neural Networks with Appropriate Inductive Biases Capture Epistemic and Aleatoric Uncertainty ☆145 · Updated 2 years ago
- ☆122 · Updated 3 years ago
- The net:cal calibration framework is a Python 3 library for measuring and mitigating miscalibration of uncertainty estimates, e.g., by a … ☆368 · Updated last year
- A PyTorch implementation of the Explainable AI work 'Contrastive layerwise relevance propagation (CLRP)' ☆17 · Updated 3 years ago
- An amortized approach for calculating local Shapley value explanations ☆102 · Updated last year
- Implementation of Estimating Training Data Influence by Tracing Gradient Descent (NeurIPS 2020) ☆236 · Updated 3 years ago
- Original dataset release for CIFAR-10H ☆82 · Updated 5 years ago
- Code for "Uncertainty Estimation Using a Single Deep Deterministic Neural Network" ☆273 · Updated 3 years ago
- Combating hidden stratification with GEORGE ☆64 · Updated 4 years ago