credo-ai / credoai_lens
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment and acts as a central gateway to assessments created in the open-source community.
☆47 · Updated last year
Alternatives and similar repositories for credoai_lens
Users interested in credoai_lens are comparing it to the libraries listed below.
- The Data Cards Playbook helps dataset producers and publishers adopt a people-centered approach to transparency in dataset documentation. ☆186 · Updated last year
- FairPrep is a design and evaluation framework for fairness-enhancing interventions that treats data as a first-class citizen. ☆11 · Updated 2 years ago
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act. ☆94 · Updated last year
- A library that implements fairness-aware machine learning algorithms. ☆126 · Updated 4 years ago
- Practical examples of "Flawed Machine Learning Security" together with ML Security best practice across the end-to-end stages of the mach… ☆112 · Updated 3 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics. ☆77 · Updated 2 years ago
- Editing machine learning models to reflect human knowledge and values. ☆127 · Updated last year
- A toolkit for tools and techniques related to the privacy and compliance of AI models. ☆105 · Updated 2 months ago
- A toolkit that streamlines and automates the generation of model cards. ☆435 · Updated last year
- A collection of implementations of fair ML algorithms. ☆12 · Updated 7 years ago
- Comparing fairness-aware machine learning techniques. ☆159 · Updated 2 years ago
- Bias Auditing & Fair ML Toolkit. ☆725 · Updated 2 months ago
- this repo might get accepted ☆28 · Updated 4 years ago
- A visual analytic system for fair data-driven decision making. ☆25 · Updated 2 years ago
- Privacy Testing for Deep Learning. ☆205 · Updated last year
- TensorFlow's Fairness Evaluation and Visualization Toolkit. ☆352 · Updated 3 weeks ago
- AI Data Management & Evaluation Platform. ☆215 · Updated last year
- Project for open-sourcing research efforts on Backward Compatibility in Machine Learning. ☆73 · Updated last year
- Python library for implementing Responsible AI mitigations. ☆66 · Updated last year
- Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data! ☆23 · Updated 3 months ago
- Metrics to evaluate quality and efficacy of synthetic datasets. ☆241 · Updated this week
- The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large-scale machine learning wo… ☆172 · Updated 2 years ago
- This repo accompanies the FF22 research cycle focused on unsupervised methods for detecting concept drift. ☆30 · Updated 3 years ago
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon. ☆32 · Updated 7 months ago
- SDNist: Benchmark data and evaluation tools for data synthesizers. ☆36 · Updated last month
- MLOps Cookiecutter Template: A Base Project Structure for Secure Production ML Engineering. ☆41 · Updated 8 months ago
- ☆269 · Updated last year
- Proposal Documents for Fairlearn. ☆9 · Updated 4 years ago
- Explore/examine/explain/expose your model with the explabox! ☆17 · Updated this week
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP). ☆82 · Updated 2 years ago