mit-ll-responsible-ai / responsible-ai-toolbox
PyTorch-centric library for evaluating and enhancing the robustness of AI technologies
☆51 · Updated 9 months ago
Related projects
Alternatives and complementary repositories for responsible-ai-toolbox
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆30 · Updated 7 months ago
- ☆65 · Updated last year
- PyTorch Explain: Interpretable Deep Learning in Python. ☆146 · Updated 6 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆57 · Updated last year
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆76 · Updated last year
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆22 · Updated 2 weeks ago
- Code and other relevant files for the NeurIPS 2022 tutorial: Foundational Robustness of Foundation Models. ☆70 · Updated last year
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆118 · Updated 5 months ago
- A fast, effective data attribution method for neural networks in PyTorch ☆179 · Updated this week
- Discount jupyter. ☆42 · Updated 2 years ago
- ☆16 · Updated 3 months ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆233 · Updated 3 months ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆53 · Updated last year
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆75 · Updated last year
- Python package to compute interaction indices that extend the Shapley Value (AISTATS 2023). ☆17 · Updated last year
- Data for "Datamodels: Predicting Predictions with Training Data" ☆91 · Updated last year
- Tools for studying developmental interpretability in neural networks. ☆77 · Updated last week
- LENS Project ☆42 · Updated 9 months ago
- A benchmark of data-centric tasks from across the machine learning lifecycle. ☆72 · Updated 2 years ago
- A centralized place for deep thinking code and experiments ☆77 · Updated last year
- ☆117 · Updated 2 years ago
- PyTorch code corresponding to my blog series on adversarial examples and (confidence-calibrated) adversarial training. ☆67 · Updated last year
- PyTorch package to train and audit ML models for Individual Fairness ☆63 · Updated last year
- ☆50 · Updated last year
- Mixture of Decision Trees for Interpretable Machine Learning ☆11 · Updated 3 years ago
- A toolkit for quantitative evaluation of data attribution methods. ☆33 · Updated this week
- ☆134 · Updated last year
- Dataset and code for CLEVR-XAI. ☆28 · Updated last year
- ☆141 · Updated last year
- ☆34 · Updated 3 years ago