mit-ll-responsible-ai / responsible-ai-toolbox
PyTorch-centric library for evaluating and enhancing the robustness of AI technologies
☆ 57 · Updated last year
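As a taste of the robustness evaluations this toolbox targets, here is a minimal L∞ PGD attack sketch in plain PyTorch. It is a generic illustration, not the toolbox's own API; `model`, `x`, and `y` are placeholders.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Maximize cross-entropy within an L-infinity eps-ball around x."""
    # random start inside the eps-ball
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # ascent step
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                              # keep pixels valid
    return x_adv.detach()

# Robust accuracy: fraction of PGD examples the model still classifies correctly.
# acc = (model(pgd_attack(model, x, y)).argmax(1) == y).float().mean()
```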
Alternatives and similar repositories for responsible-ai-toolbox
Users interested in responsible-ai-toolbox are comparing it to the libraries listed below.
- OpenXAI: Towards a Transparent Evaluation of Model Explanations (☆ 248, updated last year)
- XAI-Bench is a library for benchmarking feature attribution explainability techniques (☆ 70, updated 2 years ago)
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization (☆ 136, updated last year)
- Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP (☆ 233, updated 3 months ago); see the LRP sketch after this list.
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics (☆ 39, updated last year)
- 💡 Adversarial attacks on explanations and how to defend them (☆ 328, updated 11 months ago)
- PyTorch Explain: Interpretable Deep Learning in Python (☆ 163, updated last year)
- A toolkit for quantitative evaluation of data attribution methods (☆ 53, updated 3 months ago)
- Dataset and code for CLEVR-XAI (☆ 32, updated 2 years ago)
- As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm i… (☆ 99, updated last month)
- PyTorch code corresponding to my blog series on adversarial examples and (confidence-calibrated) adversarial training (☆ 67, updated 2 years ago)
- A benchmark of data-centric tasks from across the machine learning lifecycle (☆ 72, updated 3 years ago)
- Training and evaluating NBM and SPAM for interpretable machine learning (☆ 78, updated 2 years ago)
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation (☆ 138, updated last month)
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers (☆ 100, updated 7 months ago)
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… (☆ 266, updated last month)
- ModelDiff: A Framework for Comparing Learning Algorithms (☆ 59, updated 2 years ago)
- 🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet (☆ 186, updated 2 years ago)
- Optimal Transport Dataset Distance (☆ 173, updated 3 years ago)
- Betty: an automatic differentiation library for generalized meta-learning and multilevel optimization (☆ 344, updated last year)
- This repository contains a JAX implementation of conformal training, corresponding to the ICLR'22 paper "Learning Optimal Conformal Classi…" (☆ 129, updated 3 years ago); see the conformal prediction sketch after this list.
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… (☆ 71, updated 2 years ago)
- ARMORY Adversarial Robustness Evaluation Test Bed (☆ 186, updated last year)
- 👋 Xplique is a Neural Networks Explainability Toolbox (☆ 703, updated last year)
- This repository holds code and other relevant files for the NeurIPS 2022 tutorial "Foundational Robustness of Foundation Models" (☆ 72, updated 2 years ago)
- Interpret text data with LLMs (sklearn compatible) (☆ 171, updated 3 weeks ago)
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) (☆ 68, updated 2 years ago)
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics (☆ 77, updated 2 years ago)
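For the Zennit entry above, a minimal LRP attribution sketch following the composite/attributor pattern from Zennit's README; the model, input, and target class are placeholder assumptions.

```python
import torch
from torchvision.models import vgg16
from zennit.attribution import Gradient
from zennit.composites import EpsilonPlusFlat

model = vgg16(weights=None).eval()                  # placeholder model
x = torch.randn(1, 3, 224, 224, requires_grad=True) # placeholder input

composite = EpsilonPlusFlat()                       # standard LRP rule set
with Gradient(model=model, composite=composite) as attributor:
    # propagate relevance for class 0 via a one-hot output vector
    output, relevance = attributor(x, torch.eye(1000)[[0]])

print(relevance.shape)                              # relevance per input pixel
```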
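And for the conformal training and conformal risk control entries, a minimal split-conformal classification sketch in NumPy showing the prediction sets those methods build on; the function and variable names are illustrative, not either repository's API.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Prediction sets with roughly (1 - alpha) marginal coverage."""
    n = len(cal_labels)
    # nonconformity score: 1 minus the probability of the true class
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # finite-sample-corrected quantile of the calibration scores
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # keep every class whose softmax score clears the threshold
    return test_probs >= 1.0 - q   # boolean matrix: (n_test, n_classes)
```

Conformal training (the ICLR'22 paper above) differentiates through a relaxed version of this thresholding at training time, while conformal risk control extends the coverage guarantee to other monotone risk functions.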