mit-ll-responsible-ai / responsible-ai-toolbox
PyTorch-centric library for evaluating and enhancing the robustness of AI technologies
☆57 · Updated last year
Alternatives and similar repositories for responsible-ai-toolbox
Users interested in responsible-ai-toolbox are comparing it to the libraries listed below.
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆34 · Updated last year
- This repository holds code and other relevant files for the NeurIPS 2022 tutorial: Foundational Robustness of Foundation Models. ☆70 · Updated 2 years ago
- Fairness toolkit for PyTorch, scikit-learn and AutoGluon ☆32 · Updated 5 months ago
- ☆60 · Updated 3 years ago
- ☆70 · Updated 2 years ago
- PyTorch code corresponding to my blog series on adversarial examples and (confidence-calibrated) adversarial training. ☆68 · Updated 2 years ago
- ☆147 · Updated last year
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ☆32 · Updated last year
- A Python library for Secure and Explainable Machine Learning ☆177 · Updated 4 months ago
- A toolkit for quantitative evaluation of data attribution methods. ☆47 · Updated last month
- Explore/examine/explain/expose your model with the explabox! ☆16 · Updated last month
- ModelDiff: A Framework for Comparing Learning Algorithms ☆56 · Updated last year
- Mixture of Decision Trees for Interpretable Machine Learning ☆11 · Updated 3 years ago
- ☆34 · Updated last year
- A benchmark of data-centric tasks from across the machine learning lifecycle. ☆72 · Updated 2 years ago
- Testing Language Models for Memorization of Tabular Datasets. ☆33 · Updated 3 months ago
- ☆28 · Updated 2 years ago
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… ☆66 · Updated 2 years ago
- Distilling Model Failures as Directions in Latent Space ☆47 · Updated 2 years ago
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation ☆129 · Updated 3 weeks ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated last year
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆66 · Updated 2 years ago
- Python package to compute interaction indices that extend the Shapley Value. AISTATS 2023. ☆17 · Updated last year
- ☆138 · Updated last year
- A Unified Approach to Evaluate and Compare Explainable AI methods ☆14 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 9 months ago
- As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm i… ☆91 · Updated 3 weeks ago
- Official repository for CMU Machine Learning Department's 10732: Robustness and Adaptivity in Shifting Environments ☆74 · Updated 2 years ago
- Discount jupyter. ☆51 · Updated 2 months ago
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆42 · Updated last year