credo-ai / credoai_lens
Credo AI Lens is a comprehensive assessment framework for AI systems. Lens standardizes model and data assessment, and acts as a central gateway to assessments created in the open source community.
☆ 47 · Updated 11 months ago
Alternatives and similar repositories for credoai_lens
Users interested in credoai_lens are comparing it to the libraries listed below.
- FairPrep is a design and evaluation framework for fairness-enhancing interventions that treats data as a first-class citizen. ☆ 11 · Updated 2 years ago
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆ 94 · Updated last year
- The AI Incident Database seeks to identify, define, and catalog artificial intelligence incidents. ☆ 189 · Updated this week
- Editing machine learning models to reflect human knowledge and values ☆ 124 · Updated last year
- MirrorDataGenerator is a Python tool that generates synthetic data based on user-specified causal relations among features in the data. I… ☆ 23 · Updated 2 years ago
- The Foundation Model Transparency Index ☆ 79 · Updated last year
- Project for open sourcing research efforts on Backward Compatibility in Machine Learning ☆ 73 · Updated last year
- Fiddler Auditor is a tool to evaluate language models. ☆ 181 · Updated last year
- A Natural Language Interface to Explainable Boosting Machines ☆ 67 · Updated 11 months ago
- Unified slicing for all Python data structures. ☆ 35 · Updated 3 months ago
- The Data Cards Playbook helps dataset producers and publishers adopt a people-centered approach to transparency in dataset documentation. ☆ 186 · Updated last year
- Lint for privacy ☆ 27 · Updated 2 years ago
- A library that implements fairness-aware machine learning algorithms ☆ 124 · Updated 4 years ago
- Modular Python Toolbox for Fairness, Accountability and Transparency Forensics ☆ 77 · Updated last year
- this repo might get accepted ☆ 28 · Updated 4 years ago
- Python package to compute interaction indices that extend the Shapley Value. AISTATS 2023. ☆ 17 · Updated last year
- Practical examples of "Flawed Machine Learning Security" together with ML Security best practice across the end to end stages of the mach… ☆ 110 · Updated 3 years ago
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆ 32 · Updated 5 months ago
- ☆ 43 · Updated last year
- MLOps Cookiecutter Template: A Base Project Structure for Secure Production ML Engineering ☆ 41 · Updated 6 months ago
- ☆ 10 · Updated 2 years ago
- A JupyterLab extension for tracking, managing, and comparing Responsible AI mitigations and experiments. ☆ 45 · Updated 2 years ago
- Proposal Documents for Fairlearn ☆ 9 · Updated 4 years ago
- This is an open-source tool to assess and improve the trustworthiness of AI systems. ☆ 92 · Updated 3 weeks ago
- ☆ 30 · Updated 2 years ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆ 82 · Updated 2 years ago
- Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data! ☆ 22 · Updated last month
- A collection of implementations of fair ML algorithms ☆ 12 · Updated 7 years ago
- FairCVtest: Testbed for Fair Automatic Recruitment and Multimodal Bias Analysis ☆ 18 · Updated last year
- This package features data-science related tasks for developing new recognizers for Presidio. It is used for the evaluation of the entire… ☆ 217 · Updated this week