credo-ai / credoai_lens
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
☆47 · Updated last year
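To give a feel for what a Lens assessment run looks like, here is a minimal sketch using a scikit-learn classifier. The artifact and evaluator names (Lens, ClassificationModel, TabularData, ModelFairness, Performance) and the metric strings follow the project's quickstart as recalled here; treat them as assumptions that may differ between credoai_lens versions rather than a verified API reference.

```python
"""Hedged sketch of a minimal credoai_lens assessment run.

Assumptions: class/method names (Lens, ClassificationModel, TabularData,
ModelFairness, Performance, lens.add/run/get_results) and metric strings are
recalled from the quickstart docs and may not match the installed version.
"""
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from credoai.lens import Lens
from credoai.artifacts import ClassificationModel, TabularData
from credoai.evaluators import ModelFairness, Performance

# Toy dataset with a synthetic binary "sensitive" attribute.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(5)])
sensitive = pd.Series((X["f0"] > 0).astype(int), name="group")

X_train, X_test, y_train, y_test, _, sens_test = train_test_split(
    X, y, sensitive, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)

# Wrap the trained model and evaluation data as Lens "artifacts".
credo_model = ClassificationModel(name="demo_classifier", model_like=model)
credo_data = TabularData(
    name="demo_data", X=X_test, y=y_test, sensitive_features=sens_test
)

# Assemble the assessment pipeline, run it, and collect results.
lens = Lens(model=credo_model, assessment_data=credo_data)
lens.add(ModelFairness(metrics=["false_positive_rate", "precision_score"]))
lens.add(Performance(metrics=["accuracy_score"]))
lens.run()

print(lens.get_results())
```

The pattern is the same for other checks: wrap the model and data as artifacts, add the evaluators you care about, then run the pipeline and export the results.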
Alternatives and similar repositories for credoai_lens
Users interested in credoai_lens are comparing it to the libraries listed below.
- The Data Cards Playbook helps dataset producers and publishers adopt a people-centered approach to transparency in dataset documentation. ☆196 · Updated last year
- The AI Incident Database seeks to identify, define, and catalog artificial intelligence incidents. ☆217 · Updated this week
- FairPrep is a design and evaluation framework for fairness-enhancing interventions that treats data as a first-class citizen. ☆11 · Updated 2 years ago
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆93 · Updated 2 years ago
- A toolkit that streamlines and automates the generation of model cards ☆441 · Updated 2 years ago
- Editing machine learning models to reflect human knowledge and values ☆127 · Updated 2 years ago
- Bias Auditing & Fair ML Toolkit ☆739 · Updated last week
- Fiddler Auditor is a tool to evaluate language models. ☆188 · Updated last year
- TalkToModel gives anyone the power of XAI through natural language conversations 💬! ☆125 · Updated 2 years ago
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆33 · Updated last month
- TensorFlow's Fairness Evaluation and Visualization Toolkit ☆358 · Updated 4 months ago
- Project for open sourcing research efforts on Backward Compatibility in Machine Learning ☆75 · Updated 2 years ago
- Python library for implementing Responsible AI mitigations. ☆68 · Updated last year
- A library that implements fairness-aware machine learning algorithms ☆126 · Updated 5 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆77 · Updated 2 years ago
- AI Data Management & Evaluation Platform ☆216 · Updated 2 years ago
- The Foundation Model Transparency Index ☆83 · Updated this week
- An open-source tool to assess and improve the trustworthiness of AI systems. ☆98 · Updated last week
- A Python package for benchmarking interpretability techniques on Transformers. ☆214 · Updated last year
- A Natural Language Interface to Explainable Boosting Machines ☆68 · Updated last year
- The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large scale machine learning workflows ☆172 · Updated 2 years ago
- ☆271 · Updated last year
- A collection of machine learning model cards and datasheets. ☆82 · Updated last month
- A collection of implementations of fair ML algorithms ☆12 · Updated 7 years ago
- AI Verify ☆39 · Updated last week
- Practical examples of "Flawed Machine Learning Security" together with ML Security best practice across the end to end stages of the machine learning model lifecycle ☆121 · Updated 3 years ago
- Synthetic data generators for structured and unstructured text, featuring differentially private learning. ☆669 · Updated 5 months ago
- FairVis: Visual Analytics for Discovering Intersectional Bias in Machine Learning ☆39 · Updated last year
- Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data! ☆26 · Updated this week
- Metrics to evaluate quality and efficacy of synthetic datasets. ☆254 · Updated 2 weeks ago