AI4LIFE-GROUP / LLM_Explainer
Code for the paper "Are Large Language Models Post Hoc Explainers?"
☆31 · Updated 10 months ago
Alternatives and similar repositories for LLM_Explainer
Users interested in LLM_Explainer are comparing it to the repositories listed below.
- ☆27 · Updated last year
- Conformal Language Modeling ☆29 · Updated last year
- ☆29 · Updated last year
- The TABLET benchmark for evaluating instruction learning with LLMs for tabular prediction. ☆21 · Updated 2 years ago
- ☆26 · Updated 6 months ago
- A simple PyTorch implementation of influence functions. ☆88 · Updated 11 months ago
- Influence Analysis and Estimation - Survey, Papers, and Taxonomy ☆78 · Updated last year
- Using Explanations as a Tool for Advanced LLMs ☆62 · Updated 8 months ago
- ☆89 · Updated 11 months ago
- Code for NeurIPS'23 paper "A Bayesian Approach To Analysing Training Data Attribution In Deep Learning" ☆17 · Updated last year
- ☆44 · Updated 3 months ago
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156 ☆33 · Updated last year
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆67 · Updated 8 months ago
- OpenDataVal: a Unified Benchmark for Data Valuation in Python (NeurIPS 2023) ☆98 · Updated 4 months ago
- A repository for summaries of recent explainable AI/interpretable ML approaches ☆76 · Updated 8 months ago
- Interpretable and efficient predictors using pre-trained language models. Scikit-learn compatible. ☆42 · Updated 3 months ago
- Uncertainty quantification for in-context learning of large language models ☆16 · Updated last year
- Code for Language-Interfaced Fine-Tuning for Non-Language Machine Learning Tasks. ☆126 · Updated 6 months ago
- Source code for NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆44 · Updated last month
- Official repository for ICML 2023 paper "Can Neural Network Memorization Be Localized?" ☆18 · Updated last year
- Data-OOB: Out-of-Bag Estimate as a Simple and Efficient Data Value (ICML 2023) ☆18 · Updated last year
- Data and code for the Corr2Cause paper (ICLR 2024) ☆105 · Updated last year
- 🤫 Code and benchmark for our ICLR 2024 spotlight paper "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Con…" ☆42 · Updated last year
- Code repo for ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆119 · Updated last year
- `dattri` is a PyTorch library for developing, benchmarking, and deploying efficient data attribution algorithms. ☆75 · Updated last month
- Uncertainty Quantification with Pre-trained Language Models: An Empirical Analysis ☆15 · Updated 2 years ago
- XAI-Bench is a library for benchmarking feature attribution explainability techniques ☆66 · Updated 2 years ago
- ☆36 · Updated 2 months ago
- Official repository for Dataset Inference for LLMs ☆34 · Updated 10 months ago
- ☆66 · Updated 2 years ago