Xplique is a Neural Networks Explainability Toolbox
★738 · Feb 24, 2026 · Updated last month
Alternatives and similar repositories for xplique
Users interested in xplique are comparing it to the libraries listed below.
- Influenciae is a TensorFlow Toolbox for Influence Functions ★66 · Mar 17, 2026 · Updated last week
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ★101 · Mar 14, 2025 · Updated last year
- Simple, compact, and hackable post-hoc deep OOD detection for already-trained TensorFlow or PyTorch image classifiers ★60 · Feb 17, 2026 · Updated last month
- Code for the paper "Look at the Variance! Efficient Black-box Explanations with Sobol-based Sensitivity Analysis" (NeurIPS 2021) ★33 · Jul 18, 2022 · Updated 3 years ago
- Puncc is a Python library for predictive uncertainty quantification using conformal prediction ★377 · Updated this week
- Overcomplete is a Vision-based SAE Toolbox ★127 · Dec 4, 2025 · Updated 3 months ago
- Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ★72 · Jul 20, 2023 · Updated 2 years ago
- Build and train Lipschitz-constrained networks: PyTorch implementation of 1-Lipschitz layers. For the TensorFlow/Keras implementation, see ht… ★42 · Updated this week
- LENS Project ★52 · Feb 22, 2024 · Updated 2 years ago
- Aligning Human & Machine Vision using explainability ★53 · Jul 14, 2023 · Updated 2 years ago
- ★39 · Sep 15, 2025 · Updated 6 months ago
- ★16 · May 1, 2025 · Updated 10 months ago
- A runway dataset and a generator of synthetic aerial images with automatic labeling ★129 · Dec 15, 2025 · Updated 3 months ago
- New implementations of old orthogonal layers unlock large-scale training ★29 · Sep 19, 2025 · Updated 6 months ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ★649 · Mar 9, 2026 · Updated 2 weeks ago
- OmniXAI: A Library for eXplainable AI ★963 · Jul 23, 2024 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ★252 · Aug 17, 2024 · Updated last year
- CODS - Conformal Object Detection and Segmentation ★21 · Dec 15, 2025 · Updated 3 months ago
- Learning to Estimate Shapley Values with Vision Transformers ★37 · Mar 4, 2026 · Updated 3 weeks ago
- Generic Engine for Multi-disciplinary Scenarios, Exploration and Optimization. This is a MIRROR of our GitLab repository; the development… ★32 · Updated this week
- [CVPRW 2024] Conformal prediction for uncertainty quantification in image segmentation ★26 · Dec 9, 2024 · Updated last year
- ★17 · Aug 17, 2021 · Updated 4 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ★44 · Apr 17, 2024 · Updated last year
- Interpreto is an interpretability toolbox for LLMs ★161 · Updated this week
- Hierarchical extreme multiclass and multi-label classification ★18 · Jan 5, 2023 · Updated 3 years ago
- [MICCAI 2024] DRIM: Learning Disentangled Representations from Incomplete Multimodal Healthcare Data ★18 · Apr 3, 2025 · Updated 11 months ago
- Interpretability for sequence generation models ★462 · Mar 6, 2026 · Updated 3 weeks ago
- PiML (Python Interpretable Machine Learning) toolbox for model development & diagnostics ★1,284 · Mar 30, 2025 · Updated 11 months ago
- Code for the paper "Towards Better Understanding Attribution Methods" (CVPR 2022) ★17 · Jun 13, 2022 · Updated 3 years ago
- Codebase for information-theoretic Shapley values to explain predictive uncertainty. This repo contains the code related to the paper Watso… ★22 · Jul 4, 2024 · Updated last year
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks with attribution methods like LRP ★242 · Jan 30, 2026 · Updated last month
- A toolbox to iNNvestigate neural networks' predictions! ★1,307 · Apr 11, 2025 · Updated 11 months ago
- Explanation Optimization ★13 · Oct 16, 2020 · Updated 5 years ago
- A collection of research materials on explainable AI/ML ★1,624 · Mar 7, 2026 · Updated 2 weeks ago
- Interpretability and explainability of data and machine learning models ★1,771 · Mar 18, 2026 · Updated last week
- A unified framework of perturbation- and gradient-based attribution methods for deep neural network interpretability. DeepExplain also in… ★760 · Aug 25, 2020 · Updated 5 years ago
- Experimental toolbox for quantum Shapley values ★10 · Jan 2, 2024 · Updated 2 years ago
- A lightweight implementation of removal-based explanations for ML models ★59 · Jul 19, 2021 · Updated 4 years ago
- Integrated Grad-CAM (submitted to the ICASSP 2021 conference) ★19 · Feb 8, 2021 · Updated 5 years ago