GenAISHAP / TokenSHAP
A framework for interpreting modern AI systems using Monte Carlo Shapley value estimation. Model-agnostic explainability across language models, vision-language models, video understanding, and autonomous agents.
☆72 · Updated 3 weeks ago
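The core estimator behind a tool like this is simple to sketch: sample random orderings of the input tokens, build coalitions incrementally, and average each token's marginal contribution to the model's output score. Below is a minimal, self-contained sketch of that permutation-sampling approach; `monte_carlo_token_shapley`, `value_fn`, and the toy scoring function are hypothetical stand-ins for illustration, not TokenSHAP's actual API.

```python
import random
from typing import Callable, Sequence

def monte_carlo_token_shapley(
    tokens: Sequence[str],
    value_fn: Callable[[Sequence[str]], float],
    num_samples: int = 200,
    seed: int = 0,
) -> list[float]:
    """Estimate per-token Shapley values by sampling random token orderings.

    `value_fn` scores a token subset, e.g. the model's score for the original
    output when only those tokens are kept. It is a placeholder here, not
    TokenSHAP's actual interface.
    """
    rng = random.Random(seed)
    n = len(tokens)
    contributions = [0.0] * n
    for _ in range(num_samples):
        order = list(range(n))
        rng.shuffle(order)                       # random coalition-building order
        kept: set[int] = set()
        prev = value_fn([])                      # value of the empty coalition
        for idx in order:
            kept.add(idx)
            subset = [tokens[i] for i in sorted(kept)]  # keep original token order
            cur = value_fn(subset)
            contributions[idx] += cur - prev     # marginal contribution of token idx
            prev = cur
    return [c / num_samples for c in contributions]

# Toy usage: the "model" scores a subset by the fraction of key tokens it keeps.
tokens = ["The", "movie", "was", "brilliant"]
score = lambda subset: sum(t == "brilliant" for t in subset) / len(tokens)
print(monte_carlo_token_shapley(tokens, score))  # "brilliant" dominates
```

Averaging marginal contributions over many sampled orderings converges to the exact Shapley values, and only `value_fn` ever touches the model, which is what makes the approach model-agnostic.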
Alternatives and similar repositories for TokenSHAP
Users interested in TokenSHAP are comparing it to the libraries listed below
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. ☆174 · Updated this week
- Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning ☆46 · Updated 2 years ago
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆319 · Updated last year
- ☆112 · Updated 11 months ago
- Toolkit for attaching, training, saving and loading of new heads for transformer models ☆294 · Updated 11 months ago
- Public code repo for paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" ☆112 · Updated last year
- ☆80 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". ☆224 · Updated last month
- awesome synthetic (text) datasets ☆321 · Updated last month
- code for training & evaluating Contextual Document Embedding models ☆202 · Updated 8 months ago
- PyTorch library for Active Fine-Tuning ☆96 · Updated 4 months ago
- ☆255 · Updated this week
- Stanford NLP Python library for understanding and improving PyTorch models via interventions ☆857 · Updated last week
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆198 · Updated 11 months ago
- This is the reproduction repository for my 🤗 Hugging Face blog post on synthetic data ☆68 · Updated last year
- A mechanistic approach for understanding and detecting factual errors of large language models. ☆49 · Updated last year
- ☆327 · Updated last year
- EvalAssist is an open-source project that simplifies using large language models as evaluators (LLM-as-a-Judge) of the output of other large language models… ☆93 · Updated 2 months ago
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆163 · Updated 7 months ago
- Flexible library for merging large language models (LLMs) via evolutionary optimization (ACL 2025 Demo). ☆98 · Updated 6 months ago
- WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting. ☆62 · Updated last month
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use cases… ☆176 · Updated 2 weeks ago
- 🦄 Unitxt is a Python library for enterprise-grade evaluation of AI performance, offering the world's largest catalog of tools and data… ☆212 · Updated 2 weeks ago
- Notebooks for training universal 0-shot classifiers on many different tasks ☆139 · Updated last year
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ☆51 · Updated last year
- ☆152 · Updated 5 months ago
- Steering vectors for transformer language models in PyTorch / Hugging Face ☆140 · Updated 11 months ago
- ☆60 · Updated 4 months ago
- Code for the EMNLP 2024 paper "Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps" ☆142 · Updated 3 months ago
- Evaluating LLMs with fewer examples ☆169 · Updated last year