dylan-slack / TalkToModel
TalkToModel gives anyone the powers of XAI through natural language conversations 💬!
⭐ 108 · Updated last year
Related projects:
- A Natural Language Interface to Explainable Boosting Machines ⭐ 59 · Updated 2 months ago
- A suite of autoregressive and Seq2Seq (sequence-to-sequence) transformer models for tabular and relational synthetic data generation. ⭐ 203 · Updated last month
- OpenXAI: Towards a Transparent Evaluation of Model Explanations ⭐ 227 · Updated last month
- Scikit-learn-friendly library to interpret and prompt-engineer text datasets using large language models. ⭐ 151 · Updated 2 months ago
- A Python package for benchmarking interpretability techniques on Transformers. ⭐ 207 · Updated 2 months ago
- Code for Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks. ⭐ 120 · Updated 5 months ago
- We develop benchmarks and analysis tools to evaluate the causal reasoning abilities of LLMs. ⭐ 87 · Updated 3 months ago
- A novel approach for synthesizing tabular data using pretrained large language models ⭐ 271 · Updated 3 months ago
- Course for Interpreting ML Models ⭐ 51 · Updated last year
- A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, to… ⭐ 50 · Updated this week
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" ⭐ 109 · Updated 11 months ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ⭐ 55 · Updated last year
- Responsible AI knowledge base ⭐ 91 · Updated last year
- Semi-automatic feature engineering process using Language Models and your dataset descriptions. Based on the paper "LLMs for Semi-Automat… ⭐ 121 · Updated 7 months ago
- ⭐ 256 · Updated this week
- LLM_library is a comprehensive repository that serves as a one-stop resource for hands-on code and insightful summaries. ⭐ 68 · Updated 8 months ago
- Stanford CRFM's initiative to assess potential compliance with the draft EU AI Act ⭐ 92 · Updated 11 months ago
- ⭐ 57 · Updated 5 months ago
- A mechanistic approach for understanding and detecting factual errors of large language models. ⭐ 38 · Updated 2 months ago
- Training and evaluating NBM and SPAM for interpretable machine learning. ⭐ 76 · Updated last year
- This is an open-source tool to assess and improve the trustworthiness of AI systems. ⭐ 70 · Updated this week
- A repository for summaries of recent explainable AI/interpretable ML approaches ⭐ 55 · Updated 3 weeks ago
- Testing Language Models for Memorization of Tabular Datasets. ⭐ 26 · Updated last week
- ⭐ 93 · Updated 3 weeks ago
- Full Stack Graph Machine Learning: Theory, Practice, Tools and Techniques ⭐ 63 · Updated 3 months ago
- Experimental library integrating LLM capabilities to support causal analyses ⭐ 70 · Updated last week
- PyTorch package to train and audit ML models for Individual Fairness ⭐ 63 · Updated last year
- This repository collects all relevant resources about interpretability in LLMs ⭐ 230 · Updated last week
- Fairness toolkit for PyTorch, scikit-learn, and AutoGluon ⭐ 16 · Updated this week
- ⭐ 22 · Updated last year