microsoft / responsible-ai-toolbox-tracker
A JupyterLab extension for tracking, managing, and comparing Responsible AI mitigations and experiments.
☆45 · Updated 2 years ago
Alternatives and similar repositories for responsible-ai-toolbox-tracker
Users interested in responsible-ai-toolbox-tracker are comparing it to the libraries listed below:
- Python library for implementing Responsible AI mitigations. ☆66 · Updated last year
- Repo to hold examples of responsible model assessment for a variety of different verticals such as healthcare and financial services ☆65 · Updated last year
- A Repository for the public preview of Responsible AI in AML vNext ☆9 · Updated 3 months ago
- This is an open-source tool to assess and improve the trustworthiness of AI systems. ☆92 · Updated 3 weeks ago
- Responsible AI knowledge base ☆103 · Updated 2 years ago
- A curated list of awesome academic research, books, code of ethics, data sets, institutes, maturity models, newsletters, principles, podc… ☆74 · Updated last week
- Examples and recipes around federated learning in Azure ML. ☆68 · Updated last year
- Generates synthetic data and user interfaces for privacy-preserving data sharing and analysis. ☆120 · Updated last year
- Self-verification for LLMs. ☆64 · Updated last year
- A data discovery and manipulation toolset for unstructured data ☆54 · Updated last year
- Providing tools and templates to facilitate modern MLOps practices ☆84 · Updated 9 months ago
- Generating and validating natural-language explanations for the brain. ☆52 · Updated 2 months ago
- RAI is a Python library written to help AI developers with various aspects of responsible AI development. ☆58 · Updated 11 months ago
- TalkToModel gives anyone the power of XAI through natural language conversations 💬! ☆120 · Updated last year
- Testing Language Models for Memorization of Tabular Datasets. ☆33 · Updated 3 months ago
- ML-based medical imaging using Azure ☆129 · Updated 2 years ago
- 📖 A curated list of resources dedicated to synthetic data ☆129 · Updated 2 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆67 · Updated 11 months ago
- The Data Cards Playbook helps dataset producers and publishers adopt a people-centered approach to transparency in dataset documentation. ☆186 · Updated last year
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆94 · Updated last year
- PyTorch package to train and audit ML models for Individual Fairness ☆66 · Updated 3 weeks ago
- Responsible AI Workshop: a series of tutorials & walkthroughs illustrating how to put responsible AI into practice ☆44 · Updated 3 months ago
- The Foundation Model Transparency Index ☆79 · Updated last year
- 🤗Transformers: State-of-the-art Natural Language Processing for Pytorch and TensorFlow 2.0. ☆29 · Updated last year
- Medical Hallucination in Foundation Models and Their Impact on Healthcare (2025) ☆57 · Updated 2 months ago
- This repository contains sample applications and sample code to help you get started with the different Health-AI services. Ba… ☆45 · Updated 4 months ago
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆150 · Updated this week
- Editing machine learning models to reflect human knowledge and values ☆124 · Updated last year
- Project for open sourcing research efforts on Backward Compatibility in Machine Learning ☆73 · Updated last year
- ☆68 · Updated this week