holistic-ai / holisticai
This is an open-source tool to assess and improve the trustworthiness of AI systems.
☆92 · Updated last month
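For orientation, below is a minimal usage sketch of the kind of check holisticai is built for. It assumes the package exposes a `statistical_parity(group_a, group_b, y_pred)` helper under `holisticai.bias.metrics`; consult the repository's documentation for the exact names and signatures.

```python
# Minimal sketch (not taken from the repo): measuring group fairness of binary predictions.
# Assumes holisticai.bias.metrics provides statistical_parity(group_a, group_b, y_pred);
# check the holisticai documentation for the exact API.
import numpy as np
from holisticai.bias.metrics import statistical_parity

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])               # model predictions
group_a = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)  # protected-group membership
group_b = ~group_a                                         # everyone else

# Difference in positive-prediction rates between the two groups;
# 0.0 indicates parity, and the sign shows which group is favoured.
print(statistical_parity(group_a, group_b, y_pred))
```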
Alternatives and similar repositories for holisticai
Users interested in holisticai are comparing it to the libraries listed below.
- TalkToModel gives anyone the power of XAI through natural language conversations 💬! ☆121 · Updated last year
- Responsible AI knowledge base ☆104 · Updated 2 years ago
- A curated list of awesome academic research, books, code of ethics, data sets, institutes, maturity models, newsletters, principles, podc… ☆75 · Updated this week
- PyTorch package to train and audit ML models for Individual Fairness ☆66 · Updated last month
- Experimental library integrating LLM capabilities to support causal analyses ☆216 · Updated 2 weeks ago
- 📖 A curated list of resources dedicated to synthetic data ☆131 · Updated 2 years ago
- A Natural Language Interface to Explainable Boosting Machines ☆67 · Updated 11 months ago
- AI Verify ☆18 · Updated last week
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆247 · Updated 10 months ago
- An open-source compliance-centered evaluation framework for Generative AI models ☆153 · Updated this week
- Editing machine learning models to reflect human knowledge and values ☆126 · Updated last year
- A suite of auto-regressive and Seq2Seq (sequence-to-sequence) transformer models for tabular and relational synthetic data generation. ☆230 · Updated 3 weeks ago
- Fiddler Auditor is a tool to evaluate language models. ☆183 · Updated last year
- Testing Language Models for Memorization of Tabular Datasets. ☆33 · Updated 4 months ago
- pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation ☆132 · Updated last month
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆68 · Updated 2 years ago
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ☆94 · Updated last year
- CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms ☆291 · Updated last year
- 📚 A curated list of papers & technical articles on AI Quality & Safety ☆184 · Updated 2 months ago
- Client interface to Cleanlab Studio and the Trustworthy Language Model ☆32 · Updated 4 months ago
- The Foundation Model Transparency Index ☆81 · Updated last year
- Introduction to Data-Centric AI, MIT IAP 2023 🤖 ☆100 · Updated 4 months ago
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Benchmarks for the Evaluation of LLM Supervision ☆32 · Updated 2 months ago
- Metrics to evaluate quality and efficacy of synthetic datasets. ☆237 · Updated this week
- This repo accompanies the FF22 research cycle focused on unsupervised methods for detecting concept drift ☆30 · Updated 3 years ago
- ☆43 · Updated 7 months ago
- [Experimental] Causal graphs that are networkx-compliant for the py-why ecosystem. ☆56 · Updated this week
- A library of Reversible Data Transforms ☆127 · Updated this week
- A Causal AI package for causal graphs. ☆60 · Updated 2 months ago