mit-ll-responsible-ai / responsible-ai-toolbox
PyTorch-centric library for evaluating and enhancing the robustness of AI technologies
☆57 Updated last year
Alternatives and similar repositories for responsible-ai-toolbox
Users interested in responsible-ai-toolbox are comparing it to the libraries listed below.
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆251 Updated last year
- PyTorch Explain: Interpretable Deep Learning in Python. ☆164 Updated last year
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 Updated 2 years ago
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation ☆140 Updated 2 months ago
- PyTorch code corresponding to my blog series on adversarial examples and (confidence-calibrated) adversarial training. ☆67 Updated 2 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆138 Updated last year
- 🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet ☆186 Updated 2 years ago
- A toolkit for quantitative evaluation of data attribution methods. ☆54 Updated 4 months ago
- ☆75 Updated 2 years ago
- ☆139 Updated 2 years ago
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆39 Updated last year
- 💡 Adversarial attacks on explanations and how to defend against them ☆330 Updated last year
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆78 Updated 2 years ago
- Optimal Transport Dataset Distance ☆173 Updated 3 years ago
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers ☆100 Updated 8 months ago
- Dataset and code for CLEVR-XAI. ☆33 Updated 2 years ago
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments ☆77 Updated 2 years ago
- Zennit is a high-level Python framework, built on PyTorch, for explaining and exploring neural networks with attribution methods such as LRP. ☆236 Updated 4 months ago
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty i… ☆266 Updated 2 months ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆58 Updated 2 years ago
- A benchmark of data-centric tasks from across the machine learning lifecycle. ☆72 Updated 3 years ago
- This repository holds code and other relevant files for the NeurIPS 2022 tutorial: Foundational Robustness of Foundation Models. ☆72 Updated 2 years ago
- TalkToModel gives anyone the power of XAI through natural language conversations 💬! ☆125 Updated 2 years ago
- A Python Data Valuation Package ☆30 Updated 2 years ago
- Advances in Neural Information Processing Systems (NeurIPS 2021) ☆22 Updated 3 years ago
- Testing Language Models for Memorization of Tabular Datasets. ☆36 Updated 9 months ago
- ☆35 Updated 2 years ago
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations ☆631 Updated 4 months ago
- Fairness toolkit for PyTorch, scikit-learn and AutoGluon ☆33 Updated 2 weeks ago
- Betty: an automatic differentiation library for generalized meta-learning and multilevel optimization ☆344 Updated last year