mit-ll-responsible-ai / responsible-ai-toolbox
PyTorch-centric library for evaluating and enhancing the robustness of AI technologies
☆57 · Updated last year
Alternatives and similar repositories for responsible-ai-toolbox
Users interested in responsible-ai-toolbox are comparing it to the libraries listed below.
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 11 months ago
- PyTorch Explain: Interpretable Deep Learning in Python. ☆159 · Updated last year
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆78 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆70 · Updated 2 years ago
- Build and train Lipschitz constrained networks: TensorFlow implementation of k-Lipschitz layers ☆97 · Updated 4 months ago
- A toolkit for quantitative evaluation of data attribution methods. ☆53 · Updated 2 weeks ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆59 · Updated last year
- Betty: an automatic differentiation library for generalized meta-learning and multilevel optimization ☆342 · Updated last year
- Testing Language Models for Memorization of Tabular Datasets. ☆34 · Updated 5 months ago
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation ☆133 · Updated 2 months ago
- A benchmark of data-centric tasks from across the machine learning lifecycle. ☆72 · Updated 3 years ago
- PyTorch code corresponding to my blog series on adversarial examples and (confidence-calibrated) adversarial training. ☆68 · Updated 2 years ago
- ☆139 · Updated last year
- Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations (a minimal usage sketch follows this list) ☆612 · Updated last week
- Uncertainty Quantification 360 (UQ360) is an extensible open-source toolkit that can help you estimate, communicate and use uncertainty in machine learning model predictions ☆267 · Updated 2 months ago
- 💡 Adversarial attacks on explanations and how to defend them ☆321 · Updated 8 months ago
- This repository holds code and other relevant files for the NeurIPS 2022 tutorial: Foundational Robustness of Foundation Models. ☆71 · Updated 2 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆131 · Updated last year
- Neural Pipeline Search (NePS): Helps deep learning experts find the best neural pipeline. ☆77 · Updated this week
- 🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet ☆185 · Updated 2 years ago
- 👋 Code for: "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆66 · Updated 2 years ago
- List of relevant resources for machine learning from explanatory supervision ☆158 · Updated 2 weeks ago
- scikit-activeml: Python library for active learning on top of scikit-learn (see the query-loop sketch after this list) ☆171 · Updated this week
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆37 · Updated last year
- This repository contains a Jax implementation of conformal training corresponding to the ICLR'22 paper "Learning Optimal Conformal Classifiers" (a plain-NumPy split conformal baseline is sketched after this list) ☆130 · Updated 2 years ago
- Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP (see the LRP sketch after this list). ☆228 · Updated 3 weeks ago
- For calculating Shapley values via linear regression. ☆69 · Updated 4 years ago
- TalkToModel gives anyone the power of XAI through natural language conversations 💬! ☆121 · Updated 2 years ago
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆292 · Updated last year
- Uncertainty-aware representation learning (URL) benchmark ☆105 · Updated 4 months ago
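
Since several of these entries are evaluation toolkits, a few quick-start sketches may help compare them. First, Quantus: a minimal sketch of scoring an explanation's robustness, following the batch-evaluation pattern documented in the Quantus README. The toy model, the random data, and the exact keyword arguments are assumptions here; argument names have shifted across Quantus versions, so check the current docs.

```python
# Sketch: scoring a saliency explanation's robustness with Quantus.
# Assumes `pip install quantus torch`; follows the README's pattern,
# but keyword names may differ across Quantus versions.
import numpy as np
import torch.nn as nn
import quantus

# Toy classifier and random data standing in for a real model/dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x_batch = np.random.rand(8, 1, 28, 28).astype(np.float32)
y_batch = np.random.randint(0, 10, size=8)

# Max-Sensitivity: how much the explanation changes under small input
# perturbations. Lower is better (more robust).
metric = quantus.MaxSensitivity(nr_samples=10)
scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    a_batch=None,                   # let Quantus compute attributions
    device="cpu",
    explain_func=quantus.explain,   # built-in explainer dispatcher
    explain_func_kwargs={"method": "Saliency"},
)
print(scores)  # one robustness score per sample in the batch
```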
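scikit-activeml wraps scikit-learn estimators in an active-learning query loop. Below is a sketch of pool-based uncertainty sampling modeled on the project's README; the synthetic dataset and the entropy query strategy are illustrative choices, not the library's only options.

```python
# Sketch: pool-based active learning with scikit-activeml.
# Assumes `pip install scikit-activeml scikit-learn`.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from skactiveml.classifier import SklearnClassifier
from skactiveml.pool import UncertaintySampling
from skactiveml.utils import MISSING_LABEL

# Unlabeled pool: all labels start as MISSING_LABEL (NaN).
X, y_true = make_classification(n_samples=200, random_state=0)
y = np.full(shape=y_true.shape, fill_value=MISSING_LABEL)

clf = SklearnClassifier(LogisticRegression(), classes=np.unique(y_true))
qs = UncertaintySampling(method="entropy")

# Query loop: pick the most uncertain sample, "label" it via the oracle.
for _ in range(10):
    query_idx = qs.query(X=X, y=y, clf=clf, batch_size=1)
    y[query_idx] = y_true[query_idx]  # oracle reveals the true label

clf.fit(X, y)  # fit on the partially labeled pool
```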
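The conformal training repository implements the ICLR'22 method in Jax; the underlying procedure it differentiates through is standard split conformal prediction, which is easy to state in plain NumPy. The sketch below is that textbook baseline, not the repository's API; `probs_cal`, `labels_cal`, and `probs_test` are hypothetical arrays of softmax outputs and labels.

```python
# Sketch: split conformal prediction in NumPy -- the baseline procedure
# that conformal training optimizes through end to end.
import numpy as np

def conformal_sets(probs_cal, labels_cal, probs_test, alpha=0.1):
    n = len(labels_cal)
    # Nonconformity score: one minus the softmax mass on the true class.
    scores = 1.0 - probs_cal[np.arange(n), labels_cal]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, q_level, method="higher")
    # Prediction set: every class whose score is at most the threshold,
    # i.e. whose probability is at least 1 - qhat.
    return probs_test >= 1.0 - qhat  # boolean membership mask

# Hypothetical calibration and test softmax outputs.
probs_cal = np.random.dirichlet(np.ones(10), size=500)
labels_cal = np.random.randint(0, 10, size=500)
probs_test = np.random.dirichlet(np.ones(10), size=5)
print(conformal_sets(probs_cal, labels_cal, probs_test))
```

With exchangeable data, the resulting sets contain the true label with probability at least 1 - alpha; conformal training's contribution is making a relaxation of this pipeline differentiable so the classifier itself is trained for small sets.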
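Zennit attaches LRP rules to a PyTorch model via composites and computes attributions through a context manager. The sketch below follows the documented Gradient-attributor pattern; the VGG16 backbone, the random input, and the EpsilonPlusFlat composite are illustrative assumptions.

```python
# Sketch: LRP attribution with Zennit on a (randomly initialized) VGG16.
# Assumes `pip install zennit torchvision`.
import torch
from torchvision.models import vgg16
from zennit.composites import EpsilonPlusFlat
from zennit.attribution import Gradient

model = vgg16()  # random weights; swap in a trained model in practice
model.eval()

data = torch.randn(1, 3, 224, 224)
# One-hot output vector selecting the class whose relevance we want.
target = torch.eye(1000)[[0]]

# The composite maps each layer type to an LRP rule; the attributor
# registers the rules on entry and removes them on exit.
composite = EpsilonPlusFlat()
with Gradient(model=model, composite=composite) as attributor:
    output, relevance = attributor(data, target)

print(relevance.shape)  # per-pixel relevance, same shape as the input
```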