mit-ll-responsible-ai / responsible-ai-toolbox
PyTorch-centric library for evaluating and enhancing the robustness of AI technologies
☆56 · Updated last year
Alternatives and similar repositories for responsible-ai-toolbox
Users interested in responsible-ai-toolbox compare it to the libraries listed below:
- MetaQuantus is an XAI performance tool to identify reliable evaluation metrics ☆34 · Updated last year
- Explore/examine/explain/expose your model with the explabox! ☆16 · Updated 2 weeks ago
- ☆34 · Updated last year
- Code and other relevant files for the NeurIPS 2022 tutorial "Foundational Robustness of Foundation Models" ☆70 · Updated 2 years ago
- ☆137 · Updated last year
- A benchmark of data-centric tasks from across the machine learning lifecycle ☆72 · Updated 2 years ago
- PyTorch code corresponding to my blog series on adversarial examples and (confidence-calibrated) adversarial training ☆68 · Updated 2 years ago
- PyTorch package to train and audit ML models for Individual Fairness ☆66 · Updated last week
- ☆68 · Updated last year
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆32 · Updated 5 months ago
- ModelDiff: A Framework for Comparing Learning Algorithms ☆56 · Updated last year
- Dataset and code for the CLEVR-XAI dataset ☆31 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆246 · Updated 9 months ago
- Conformal prediction for controlling monotonic risk functions. Simple accompanying PyTorch code for conformal risk control in computer vi… ☆66 · Updated 2 years ago
- As part of the Explainable AI Toolkit (XAITK), XAITK-Saliency is an open source, explainable AI framework for visual saliency algorithm i… ☆91 · Updated last week
- Python package to compute interaction indices that extend the Shapley Value. AISTATS 2023. ☆17 · Updated last year
- A toolkit for quantitative evaluation of data attribution methods ☆45 · Updated last month
- Data for "Datamodels: Predicting Predictions with Training Data" ☆97 · Updated last year
- 👋 Code for "CRAFT: Concept Recursive Activation FacTorization for Explainability" (CVPR 2023) ☆62 · Updated last year
- Extending Conformal Prediction to LLMs ☆66 · Updated 10 months ago
- An eXplainable AI toolkit with Concept Relevance Propagation and Relevance Maximization ☆126 · Updated 11 months ago
- Official PyTorch implementation of "Meaning Representations from Trajectories in Autoregressive Models" (ICLR 2024) ☆20 · Updated last year
- Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control ☆66 · Updated 6 months ago
- PyTorch Explain: Interpretable Deep Learning in Python ☆154 · Updated last year
- ☆54 · Updated last year
- Spurious Features Everywhere - Large-Scale Detection of Harmful Spurious Features in ImageNet ☆31 · Updated last year
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆66 · Updated 2 years ago
- 🛠️ Corrected Test Sets for ImageNet, MNIST, CIFAR, Caltech-256, QuickDraw, IMDB, Amazon Reviews, 20News, and AudioSet ☆184 · Updated 2 years ago
- Jax implementation of conformal training, corresponding to the ICLR'22 paper "learning optimal conformal classi…" ☆129 · Updated 2 years ago
- NeuroSurgeon is a package that enables researchers to uncover and manipulate subnetworks within models in Huggingface Transformers ☆41 · Updated 3 months ago