credo-ai / credoai_lens
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
☆47 · Updated 9 months ago
Alternatives and similar repositories for credoai_lens:
Users interested in credoai_lens are comparing it to the libraries listed below.
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆93 · Updated last year
- FairPrep is a design and evaluation framework for fairness-enhancing interventions that treats data as a first-class citizen. ☆11 · Updated 2 years ago
- A collection of machine learning model cards and datasheets. ☆75 · Updated 9 months ago
- MirrorDataGenerator is a python tool that generates synthetic data based on user-specified causal relations among features in the data. I… ☆21 · Updated 2 years ago
- The Data Cards Playbook helps dataset producers and publishers adopt a people-centered approach to transparency in dataset documentation. ☆178 · Updated 10 months ago
- This repository provides a curated list of references about Machine Learning Model Governance, Ethics, and Responsible AI. ☆114 · Updated 11 months ago
- AI Data Management & Evaluation Platform ☆215 · Updated last year
- Python library for implementing Responsible AI mitigations. ☆65 · Updated last year
- Proposal Documents for Fairlearn ☆9 · Updated 4 years ago
- Editing machine learning models to reflect human knowledge and values ☆124 · Updated last year
- Practical examples of "Flawed Machine Learning Security" together with ML Security best practice across the end to end stages of the mach… ☆105 · Updated 2 years ago
- Project for open sourcing research efforts on Backward Compatibility in Machine Learning ☆73 · Updated last year
- A toolkit that streamlines and automates the generation of model cards ☆430 · Updated last year
- The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large scale machine learning wo… ☆171 · Updated last year
- Fiddler Auditor is a tool to evaluate language models. ☆178 · Updated last year
- A software package for privacy-preserving generation of a synthetic twin to a given sensitive data set. ☆51 · Updated 7 months ago
- Lint for privacy ☆26 · Updated 2 years ago
- this repo might get accepted ☆28 · Updated 4 years ago
- MLOps Cookiecutter Template: A Base Project Structure for Secure Production ML Engineering ☆40 · Updated 4 months ago
- A library that implements fairness-aware machine learning algorithms ☆124 · Updated 4 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated last year
- ☆30 · Updated 2 years ago
- FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning ☆38 · Updated 11 months ago
- Use FastCUT with public map images and location data from a few cities to generate realistic synthetic location data for any city in the … ☆23 · Updated 3 years ago
- A curated list of awesome academic research, books, code of ethics, data sets, institutes, maturity models, newsletters, principles, podc… ☆69 · Updated this week
- TensorFlow's Fairness Evaluation and Visualization Toolkit ☆348 · Updated 2 months ago
- Unified slicing for all Python data structures. ☆35 · Updated last month
- The AI Incident Database seeks to identify, define, and catalog artificial intelligence incidents. ☆183 · Updated this week
- The Foundation Model Transparency Index ☆77 · Updated 10 months ago
- A Natural Language Interface to Explainable Boosting Machines ☆65 · Updated 9 months ago