dylan-slack / TalkToModel
TalkToModel gives anyone the power of explainable AI (XAI) through natural language conversations 💬!
☆120 · Updated last year
Alternatives and similar repositories for TalkToModel:
Users interested in TalkToModel are comparing it to the libraries listed below:
- A Natural Language Interface to Explainable Boosting Machines (see the EBM sketch after this list) ☆65 · Updated 8 months ago
- Responsible AI knowledge base ☆99 · Updated last year
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ☆240 · Updated 7 months ago
- Testing Language Models for Memorization of Tabular Datasets. ☆33 · Updated last month
- Extending Conformal Prediction to LLMs ☆64 · Updated 9 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆63 · Updated 2 years ago
- A Python package for benchmarking interpretability techniques on Transformers. ☆213 · Updated 5 months ago
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ☆32 · Updated 3 months ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆41 · Updated 3 weeks ago
- Course for Interpreting ML Models ☆52 · Updated 2 years ago
- Interpret text data using LLMs (scikit-learn compatible). ☆163 · Updated last week
- Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP) ☆82 · Updated 2 years ago
- Code for Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks. ☆123 · Updated 4 months ago
- A novel approach for synthesizing tabular data using pretrained large language models (see the synthesis sketch after this list) ☆304 · Updated 4 months ago
- A curated list of awesome academic research, books, code of ethics, data sets, institutes, maturity models, newsletters, principles, podc… ☆68 · Updated this week
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ☆108 · Updated last year
- Repository for research in the field of Responsible NLP at Meta. ☆196 · Updated 4 months ago
- Official implementation of the ACL 2024 paper "Scientific Inspiration Machines Optimized for Novelty" ☆78 · Updated 11 months ago
- ☆49 · Updated last year
- Training and evaluating NBM and SPAM for interpretable machine learning. ☆77 · Updated 2 years ago
- ☆263 · Updated 2 months ago
- ☆68 · Updated last year
- Editing machine learning models to reflect human knowledge and values ☆124 · Updated last year
- Code repository for the NAACL 2022 paper "ExSum: From Local Explanations to Model Understanding" ☆64 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- Medical Hallucination in Foundation Models and Their Impact on Healthcare (2025) ☆36 · Updated last week
- Mixture of Decision Trees for Interpretable Machine Learning ☆11 · Updated 3 years ago
- Python package to compute interaction indices that extend the Shapley Value. AISTATS 2023. ☆17 · Updated last year
- TimeLMs: Diachronic Language Models from Twitter ☆108 · Updated last year
- ☆58 · Updated last year
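
For context on the EBM entry flagged above: Explainable Boosting Machines come from the `interpret` library, and a natural-language interface sits on top of a trained EBM's per-feature shape functions. The sketch below is a minimal example that only trains an EBM with `interpret` and reads out its global explanation; it is not the TalkToEBM API itself, and the dataset and variable names are illustrative.

```python
# Minimal sketch: train an Explainable Boosting Machine and inspect the
# per-feature shape functions that a natural-language interface could
# summarize in words. Uses the `interpret` package, not TalkToEBM itself.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Illustrative dataset choice (any tabular classification data works).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)

# Global explanation: one additive term (graph) per feature.
global_expl = ebm.explain_global()
print(global_expl.data(0))  # shape-function data for the first feature
```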
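For the LLM-based tabular synthesis entry: the idea is to serialize table rows as text, fine-tune a pretrained causal language model on them, and sample new rows back. A minimal sketch under the assumption that the package is `be_great` with a `GReaT` class taking roughly these arguments (exact names and defaults may differ):

```python
# Minimal sketch of LLM-based tabular data synthesis (GReaT-style).
# Assumes the `be_great` package and its `GReaT` class; arguments may differ.
from be_great import GReaT
from sklearn.datasets import fetch_california_housing

# Any tabular dataset as a pandas DataFrame will do; this one is illustrative.
data = fetch_california_housing(as_frame=True).frame

# Fine-tune a pretrained causal LM (here distilgpt2) on textually encoded rows.
model = GReaT(llm="distilgpt2", batch_size=32, epochs=50)
model.fit(data)

# Sample new synthetic rows from the fine-tuned model.
synthetic_data = model.sample(n_samples=100)
print(synthetic_data.head())
```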