AthenaCore / AwesomeResponsibleAI
A curated list of awesome academic research, books, code of ethics, data sets, institutes, maturity models, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible, Trustworthy, and Human-Centered AI.
☆71 · Updated this week
Alternatives and similar repositories for AwesomeResponsibleAI:
Users interested in AwesomeResponsibleAI are comparing it to the libraries listed below.
- This repository provides a curated list of references about Machine Learning Model Governance, Ethics, and Responsible AI. ☆114 · Updated last year
- Responsible AI knowledge base ☆101 · Updated 2 years ago
- Course for Interpreting ML Models ☆52 · Updated 2 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆66 · Updated 9 months ago
- TalkToModel gives anyone the power of XAI through natural language conversations 💬! ☆120 · Updated last year
- Official code repo for the O'Reilly book Machine Learning for High-Risk Applications ☆103 · Updated last year
- Introduction to Data-Centric AI, MIT IAP 2023 🤖 ☆98 · Updated 2 months ago
- An open-source tool to assess and improve the trustworthiness of AI systems. ☆90 · Updated 2 weeks ago
- A collection of news articles, books, and papers on Responsible AI cases. The purpose is to study these cases and learn from them to avoi… ☆65 · Updated 4 years ago
- Learn how to monitor ML systems to identify and mitigate sources of drift before model performance decays. ☆84 · Updated 2 years ago
- Sample notebooks and prompts for LLM evaluation ☆124 · Updated last week
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆114 · Updated last week
- Learn how to create reliable ML systems by testing code, data, and models. ☆86 · Updated 2 years ago
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆94 · Updated last year
- Experimental library integrating LLM capabilities to support causal analyses ☆128 · Updated this week
- LangFair is a Python library for conducting use-case-level LLM bias and fairness assessments ☆201 · Updated this week
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆42 · Updated last month
- Interpret text data using LLMs (scikit-learn compatible). ☆163 · Updated last month
- meta_llama_2finetuned_text_generation_summarization ☆21 · Updated last year
- Material for the series of seminars on Large Language Models ☆34 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆49 · Updated 9 months ago
- The Data Cards Playbook helps dataset producers and publishers adopt a people-centered approach to transparency in dataset documentation. ☆181 · Updated 10 months ago
- nbsynthetic is a simple and robust tabular synthetic data generation library for small and medium-sized datasets. ☆65 · Updated 2 years ago
- This repository stems from our paper, "Cataloguing LLM Evaluations", and serves as a living, collaborative catalogue of LLM evaluation fr… ☆17 · Updated last year
- Hands-on tutorial on ML Fairness ☆71 · Updated last year
- ☆18 · Updated 4 months ago
- Table detection with Florence. ☆13 · Updated 9 months ago
- Includes examples on how to evaluate LLMs ☆23 · Updated 5 months ago
- This repo accompanies the FF22 research cycle focused on unsupervised methods for detecting concept drift. ☆29 · Updated 3 years ago
- A collection of machine learning model cards and datasheets. ☆75 · Updated 10 months ago