AI4LIFE-GROUP / LLM_Explainer
Code for paper: Are Large Language Models Post Hoc Explainers?
☆34 Updated last year
Alternatives and similar repositories for LLM_Explainer
Users interested in LLM_Explainer are comparing it to the libraries listed below:
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156 ☆45 Updated 2 years ago
- OpenDataVal: a Unified Benchmark for Data Valuation in Python (NeurIPS 2023) ☆99 Updated 11 months ago
- Influence Analysis and Estimation - Survey, Papers, and Taxonomy ☆84 Updated last year
- A repository for summaries of recent explainable AI/Interpretable ML approaches ☆88 Updated last year
- Code for Language-Interfaced FineTuning for Non-Language Machine Learning Tasks. ☆133 Updated last year
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆79 Updated last year
- Conformal Language Modeling ☆32 Updated 2 years ago
- ☆104 Updated last year
- ☆40 Updated last year
- ☆24 Updated last year
- Using Explanations as a Tool for Advanced LLMs ☆68 Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity. ☆85 Updated 10 months ago
- Data and code for the Corr2Cause paper (ICLR 2024) ☆111 Updated last year
- Fairness in LLMs resources ☆39 Updated 2 months ago
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆140 Updated last year
- Code for the paper "Uncertainty Estimation and Quantification for LLMs: A Simple Supervised Approach" ☆24 Updated last year
- ☆33 Updated last year
- ☆38 Updated 2 years ago
- ☆156 Updated 2 years ago
- A fast, effective data attribution method for neural networks in PyTorch ☆224 Updated last year
- A resource repository for representation engineering in large language models ☆145 Updated last year
- ☆158 Updated last year
- ☆28 Updated last year
- A reproduced PyTorch implementation of the Adversarially Reweighted Learning (ARL) model, originally presented in "Fairness without Demog… ☆20 Updated 4 years ago
- ☆57 Updated 2 years ago
- 🤫 Code and benchmark for our ICLR 2024 spotlight paper: "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Con… ☆50 Updated 2 years ago
- [EMNLP 2024] "Revisiting Who's Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective" ☆32 Updated last year
- Source code for the NeurIPS'24 paper "HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection" ☆64 Updated 8 months ago
- The TABLET benchmark for evaluating instruction learning with LLMs for tabular prediction. ☆24 Updated 2 years ago
- ☆182 Updated last year