credo-ai / credoai_lens
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
☆ 47 · Updated last year
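For orientation, here is a minimal sketch of what a Lens run looks like, loosely following the pattern in the project's quickstart documentation. It assumes the 1.x artifact/evaluator API (`ClassificationModel`, `TabularData`, `ModelFairness`); the synthetic dataset, model, and metric choices are hypothetical placeholders, not part of the repository.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

from credoai.lens import Lens
from credoai.artifacts import ClassificationModel, TabularData
from credoai.evaluators import ModelFairness

# Tiny synthetic dataset: two features, a binary label, one sensitive attribute.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 2)), columns=["f1", "f2"])
y = pd.Series(rng.integers(0, 2, size=200), name="label")
sensitive = pd.Series(rng.choice(["a", "b"], size=200), name="group")

clf = LogisticRegression().fit(X, y)

# Wrap the model and data in Lens artifacts, then attach a fairness evaluator.
credo_model = ClassificationModel(name="demo_classifier", model_like=clf)
credo_data = TabularData(name="demo_data", X=X, y=y, sensitive_features=sensitive)

lens = Lens(model=credo_model, assessment_data=credo_data)
lens.add(ModelFairness(metrics=["precision_score", "false_positive_rate"]))
lens.run()
print(lens.get_results())
```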
Alternatives and similar repositories for credoai_lens
Users interested in credoai_lens are comparing it to the libraries listed below.
- The Data Cards Playbook helps dataset producers and publishers adopt a people-centered approach to transparency in dataset documentation. ☆ 189 · Updated last year
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆ 93 · Updated last year
- The AI Incident Database seeks to identify, define, and catalog artificial intelligence incidents. ☆ 203 · Updated last week
- FairPrep is a design and evaluation framework for fairness-enhancing interventions that treats data as a first-class citizen. ☆ 11 · Updated 2 years ago
- Editing machine learning models to reflect human knowledge and values ☆ 127 · Updated last year
- A toolkit that streamlines and automates the generation of model cards ☆ 437 · Updated 2 years ago
- Project for open sourcing research efforts on Backward Compatibility in Machine Learning ☆ 73 · Updated last year
- A collection of machine learning model cards and datasheets. ☆ 77 · Updated 3 weeks ago
- Practical examples of "Flawed Machine Learning Security" together with ML Security best practice across the end to end stages of the mach… ☆ 115 · Updated 3 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆ 77 · Updated 2 years ago
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆ 32 · Updated 8 months ago
- Bias Auditing & Fair ML Toolkit (see the sketch after this list) ☆ 727 · Updated 3 months ago
- A library that implements fairness-aware machine learning algorithms ☆ 126 · Updated 4 years ago
- AI Data Management & Evaluation Platform ☆ 216 · Updated last year
- Metrics to evaluate quality and efficacy of synthetic datasets. ☆ 246 · Updated last week
- AI Verify ☆ 28 · Updated this week
- this repo might get accepted ☆ 28 · Updated 4 years ago
- Automated prompt-based testing and evaluation of Gen AI applications ☆ 153 · Updated 5 months ago
- A Natural Language Interface to Explainable Boosting Machines ☆ 68 · Updated last year
- The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large scale machine learning wo… ☆ 171 · Updated 2 years ago
- Creating the tools and data sets necessary to evaluate vulnerabilities in LLMs. ☆ 25 · Updated 5 months ago
- TalkToModel gives anyone the powers of XAI through natural language conversations 💬! ☆ 121 · Updated 2 years ago
- TensorFlow's Fairness Evaluation and Visualization Toolkit ☆ 353 · Updated 3 weeks ago
- This is an open-source tool to assess and improve the trustworthiness of AI systems. ☆ 94 · Updated 2 weeks ago
- Interpret Community extends the Interpret repository with additional interpretability techniques and utility functions to handle real-world d… ☆ 434 · Updated 6 months ago
- The Foundation Model Transparency Index ☆ 82 · Updated last year
- Find and fix bugs in natural language machine learning models using adaptive testing. ☆ 185 · Updated last year
- Public blueprints for data use cases ☆ 84 · Updated 2 weeks ago
- Practical ideas on securing machine learning models ☆ 36 · Updated 4 years ago
- Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰 ☆ 97 · Updated last year
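The "Bias Auditing & Fair ML Toolkit" entry above appears to be DSSG's Aequitas. As a rough sketch of its classic audit flow (assuming the pre-1.0 `Group`/`Bias` API; the toy DataFrame and the reference-group choice are invented for illustration):

```python
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias

# Aequitas expects a DataFrame with `score` and `label_value` columns plus
# categorical attribute columns; this toy frame is a hypothetical stand-in.
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1],
    "label_value": [1, 0, 0, 1, 0, 1],
    "group":       ["a", "a", "b", "b", "a", "b"],
})

# Group-level confusion-matrix counts and rates per attribute value.
g = Group()
xtab, _ = g.get_crosstabs(df)

# Disparities relative to a user-chosen reference group ("a" here is arbitrary).
b = Bias()
bdf = b.get_disparity_predefined_groups(
    xtab, original_df=df, ref_groups_dict={"group": "a"}, alpha=0.05
)
print(bdf[["attribute_name", "attribute_value", "fpr_disparity"]])
```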