mit-ll-responsible-ai / responsible-ai-toolbox
PyTorch-centric library for evaluating and enhancing the robustness of AI technologies
☆57 · Updated last year
Alternatives and similar repositories for responsible-ai-toolbox
Users interested in responsible-ai-toolbox are comparing it to the libraries listed below.
- A toolkit for quantitative evaluation of data attribution methods. ☆54 · Updated 5 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques. ☆70 · Updated 2 years ago
- ☆76 · Updated 2 years ago
- PyTorch Explain: Interpretable Deep Learning in Python. ☆164 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations. ☆252 · Updated last year
- Official PyTorch implementation of Meaning Representations from Trajectories in Autoregressive Models (ICLR 2024). ☆22 · Updated last year
- ☆140 · Updated 2 years ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization. ☆139 · Updated last year
- A benchmark of data-centric tasks from across the machine learning lifecycle. ☆72 · Updated 3 years ago
- PyTorch code accompanying a blog series on adversarial examples and (confidence-calibrated) adversarial training. ☆67 · Updated 2 years ago
- 💡 Adversarial attacks on explanations and how to defend against them. ☆330 · Updated last year
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation. ☆140 · Updated 3 months ago
- Reduce end-to-end training time from days to hours (or hours to minutes), and energy requirements/costs by an order of magnitude using co… ☆343 · Updated 2 years ago
- Dataset and code for CLEVR-XAI. ☆33 · Updated 2 years ago
- Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP. ☆239 · Updated 4 months ago
- ModelDiff: A Framework for Comparing Learning Algorithms. ☆58 · Updated 2 years ago
- Build and train Lipschitz-constrained networks: TensorFlow implementation of k-Lipschitz layers. ☆100 · Updated 9 months ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆78 · Updated 2 years ago
- Testing Language Models for Memorization of Tabular Datasets. ☆36 · Updated 10 months ago
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon. ☆33 · Updated last month
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023). ☆71 · Updated 2 years ago
- Code and other relevant files for the NeurIPS 2022 tutorial "Foundational Robustness of Foundation Models". ☆72 · Updated 2 years ago
- MetaQuantus is an XAI performance tool for identifying reliable evaluation metrics. ☆40 · Updated last year
- Betty: an automatic differentiation library for generalized meta-learning and multilevel optimization. ☆344 · Updated last year
- ☆35 · Updated 2 years ago
- Optimal Transport Dataset Distance. ☆173 · Updated 3 years ago
- ☆130 · Updated 4 years ago
- ARMORY Adversarial Robustness Evaluation Test Bed. ☆187 · Updated last year
- 🛠️ Corrected test sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet. ☆186 · Updated this week
- A fast, effective data attribution method for neural networks in PyTorch. ☆223 · Updated last year