responsible-ai-collaborative / aiid
The AI Incident Database seeks to identify, define, and catalog artificial intelligence incidents.
☆183 · Updated this week
Alternatives and similar repositories for aiid:
Users interested in aiid are comparing it to the libraries listed below; a generic fairness-metric sketch in plain Python follows the list.
- Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central … ☆47 · Updated 9 months ago
- Themis™ is a software fairness tester. ☆103 · Updated 4 years ago
- ☆58 · Updated 4 years ago
- Twitter algorithmic bias challenge winning submission ☆48 · Updated 3 years ago
- ☆122 · Updated 3 years ago
- Algorithmic Impact Assessment - Évaluation de l'incidence algorithmique (TS/JS) ☆62 · Updated 4 months ago
- ARMORY Adversarial Robustness Evaluation Test Bed ☆177 · Updated last year
- A collection of machine learning model cards and datasheets. ☆75 · Updated 9 months ago
- A library that implements fairness-aware machine learning algorithms ☆124 · Updated 4 years ago
- An open-source compliance-centered evaluation framework for Generative AI models ☆142 · Updated 4 months ago
- The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large scale machine learning wo… ☆171 · Updated last year
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated last year
- FairPrep is a design and evaluation framework for fairness-enhancing interventions that treats data as a first-class citizen. ☆11 · Updated 2 years ago
- Python tools to check recourse in linear classification ☆75 · Updated 4 years ago
- The code processes URLs in an attempt to consolidate different web addresses that point to the same URL and to remove potentially private… ☆23 · Updated 3 years ago
- AI risk ontology ☆11 · Updated this week
- Introduction to Data-Centric AI, MIT IAP 2023 🤖 ☆98 · Updated last month
- Research code for auditing and exploring black box machine-learning models. ☆132 · Updated last year
- Reading history for Fair ML Reading Group in Melbourne ☆36 · Updated 3 years ago
- Trust and Safety Teaching Consortium ☆65 · Updated 4 months ago
- Digital Public Goods Standard ☆120 · Updated 2 weeks ago
- An interactive simulation to explain algorithmic bias. ☆62 · Updated 8 months ago
- Library and experiments for attacking machine learning in discrete domains ☆45 · Updated 2 years ago
- Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰 ☆96 · Updated last year
- The Data Cards Playbook helps dataset producers and publishers adopt a people-centered approach to transparency in dataset documentation. ☆178 · Updated 10 months ago
- List of references about Machine Learning bias and ethics ☆61 · Updated last year
- PyTorch package to train and audit ML models for Individual Fairness ☆66 · Updated last year
- Unsupervised bias detection tool for binary AI classifiers, including a qualitative approach to assessing quantitative disparities. ☆21 · Updated 3 weeks ago
- A toolkit that streamlines and automates the generation of model cards ☆430 · Updated last year
- A Python library for Secure and Explainable Machine Learning ☆173 · Updated 2 months ago
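Many of the repositories above are fairness-auditing toolkits. As a minimal, generic sketch (not tied to the API of any library listed here; the function names and toy data are illustrative assumptions), this is the kind of group-fairness check such toolkits automate at scale: per-group selection rates and the demographic parity difference for a binary classifier.

```python
# Generic sketch of a group-fairness audit in plain Python.
# Not the API of any repository above; names and data are illustrative.
from collections import defaultdict

def selection_rates(y_pred, groups):
    """Fraction of positive predictions for each group."""
    pos = defaultdict(int)
    total = defaultdict(int)
    for yhat, g in zip(y_pred, groups):
        total[g] += 1
        pos[g] += int(yhat == 1)
    return {g: pos[g] / total[g] for g in total}

def demographic_parity_difference(y_pred, groups):
    """Largest gap in selection rate between any two groups."""
    rates = selection_rates(y_pred, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: group "b" is selected more often than group "a".
y_pred = [1, 0, 0, 1, 1, 1, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rates(y_pred, groups))                # {'a': 0.5, 'b': 0.75}
print(demographic_parity_difference(y_pred, groups))  # 0.25
```

The dedicated toolkits listed above add further metrics (e.g., equalized odds), statistical reporting, and mitigation steps, but the core audit reduces to per-group comparisons like this one.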