AthenaCore / AwesomeResponsibleAI
A curated list of awesome academic research, books, codes of ethics, data sets, institutes, maturity models, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible, Trustworthy, and Human-Centered AI.
☆84 · Updated this week
Alternatives and similar repositories for AwesomeResponsibleAI
Users interested in AwesomeResponsibleAI are comparing it to the libraries listed below.
- Responsible AI knowledge base ☆106 · Updated 2 years ago
- This repository provides a curated list of references about Machine Learning Model Governance, Ethics, and Responsible AI. ☆117 · Updated last year
- TalkToModel gives anyone the power of XAI through natural language conversations 💬! ☆124 · Updated 2 years ago
- LangFair is a Python library for conducting use-case level LLM bias and fairness assessments ☆232 · Updated last week
- 📖 A curated list of resources dedicated to synthetic data ☆134 · Updated 3 years ago
- Introduction to Data-Centric AI, MIT IAP 2024 🤖 ☆103 · Updated 2 months ago
- Fiddler Auditor is a tool to evaluate language models. ☆187 · Updated last year
- Open-Source Software, Tutorials, and Research on Data-Centric AI 🤖 ☆339 · Updated last year
- Official code repo for the O'Reilly Book - Machine Learning for High-Risk Applications ☆102 · Updated 2 years ago
- This is an open-source tool to assess and improve the trustworthiness of AI systems. ☆96 · Updated last week
- A Natural Language Interface to Explainable Boosting Machines (an EBM training sketch follows this list) ☆68 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆192 · Updated 5 months ago
- A suite of auto-regressive and Seq2Seq (sequence-to-sequence) transformer models for tabular and relational synthetic data generation. ☆233 · Updated 2 months ago
- Course for Interpreting ML Models ☆52 · Updated 2 years ago
- An open-source compliance-centered evaluation framework for Generative AI models ☆163 · Updated last week
- Learn how to monitor ML systems to identify and mitigate sources of drift before model performance decays (a minimal drift-check sketch follows this list) ☆91 · Updated 3 years ago
- Sample notebooks and prompts for LLM evaluation ☆138 · Updated 3 months ago
- Material for the series of seminars on Large Language Models ☆34 · Updated last year
- Experimental library integrating LLM capabilities to support causal analyses ☆244 · Updated last month
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆93 · Updated last year
- Evaluate uncertainty, calibration, accuracy, and fairness of LLMs on real-world survey data! ☆24 · Updated 5 months ago
- Learn how to create reliable ML systems by testing code, data and models. ☆89 · Updated 3 years ago
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use cases ☆139 · Updated 3 weeks ago
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated last year
- nbsynthetic is a simple and robust tabular synthetic data generation library for small and medium-sized datasets ☆68 · Updated 2 years ago
- Examples of using Evidently to evaluate, test and monitor ML models (an Evidently report sketch follows this list) ☆39 · Updated last month
- An index of all of our weekly concepts + code events for aspiring AI Engineers and Business Leaders! ☆86 · Updated this week
- Includes examples of how to evaluate LLMs ☆23 · Updated 10 months ago
- The Data Cards Playbook helps dataset producers and publishers adopt a people-centered approach to transparency in dataset documentation. ☆189 · Updated last year
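
A few of the entries above lend themselves to a quick hands-on illustration. For the Explainable Boosting Machines item, the listed repository layers a natural-language interface over EBMs from the `interpret` package; the sketch below is not taken from that repo, it only shows the underlying workflow of fitting an EBM and pulling its global explanation, with the dataset and parameters chosen purely for illustration.

```python
# Minimal EBM sketch using the `interpret` package; the dataset and
# hyperparameters here are illustrative assumptions, not from the repo above.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

data = load_breast_cancer(as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# EBMs are additive models: each feature gets its own shape function,
# so global and local explanations come directly from the fitted model.
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

print("Test accuracy:", ebm.score(X_test, y_test))
show(ebm.explain_global())  # renders an interactive explanation (e.g. in a notebook)
```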
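
For the drift-monitoring item, here is a minimal sketch of the core idea: a two-sample statistical test comparing a production feature against its training-time reference, using only `numpy` and `scipy`. The synthetic data and the 0.05 threshold are illustrative assumptions, not material from that repository.

```python
# Generic drift check: two-sample Kolmogorov-Smirnov test between a
# reference (training-time) sample and a current (production) sample.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time feature
current = rng.normal(loc=0.3, scale=1.0, size=5_000)    # shifted production feature

# A small p-value suggests the production distribution has drifted
# away from the reference; 0.05 is an illustrative threshold.
stat, p_value = ks_2samp(reference, current)
if p_value < 0.05:
    print(f"Possible drift detected (KS={stat:.3f}, p={p_value:.4f})")
else:
    print(f"No significant drift (KS={stat:.3f}, p={p_value:.4f})")
```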
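
For the Evidently item, a minimal report sketch, assuming the pre-1.0 `Report` API (roughly evidently 0.4.x; later releases reorganized the interface). The synthetic reference and current frames stand in for real data.

```python
# Minimal Evidently drift report sketch, assuming the 0.4.x-era API.
import numpy as np
import pandas as pd
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset

rng = np.random.default_rng(0)
reference = pd.DataFrame({"feature": rng.normal(0.0, 1.0, 1_000)})
current = pd.DataFrame({"feature": rng.normal(0.5, 1.0, 1_000)})

# Compare current data against the reference and write a standalone report.
report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("drift_report.html")
```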