AI4LIFE-GROUP / LLM_Explainer
Code for the paper "Are Large Language Models Post Hoc Explainers?"
☆33 · Updated last year
Alternatives and similar repositories for LLM_Explainer
Users interested in LLM_Explainer are comparing it to the repositories listed below.
- OpenDataVal: a Unified Benchmark for Data Valuation in Python (NeurIPS 2023) ☆99 · Updated 6 months ago
- [EMNLP 2023] Poisoning Retrieval Corpora by Injecting Adversarial Passages https://arxiv.org/abs/2310.19156 ☆36 · Updated last year
- DataInf: Efficiently Estimating Data Influence in LoRA-tuned LLMs and Diffusion Models (ICLR 2024) ☆74 · Updated 10 months ago
- A repository for summaries of recent explainable AI/interpretable ML approaches ☆81 · Updated 10 months ago
- ☆99 · Updated last year
- Influence Analysis and Estimation - Survey, Papers, and Taxonomy ☆82 · Updated last year
- Conformal Language Modeling ☆32 · Updated last year
- ☆144 · Updated last year
- ☆34 · Updated last year
- ☆20 · Updated last year
- The TABLET benchmark for evaluating instruction learning with LLMs for tabular prediction ☆21 · Updated 2 years ago
- Code repo for the ICLR 2024 paper "Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs" ☆130 · Updated last year
- Code for Language-Interfaced FineTuning for Non-Language Machine Learning Tasks ☆130 · Updated 9 months ago
- [ICML 2024 Spotlight] Differentially Private Synthetic Data via Foundation Model APIs 2: Text ☆42 · Updated 7 months ago
- ☆44 · Updated 6 months ago
- ☆32 · Updated last year
- ☆55 · Updated 2 years ago
- Using Explanations as a Tool for Advanced LLMs ☆67 · Updated 11 months ago
- A fast, effective data attribution method for neural networks in PyTorch ☆217 · Updated 9 months ago
- ☆13 · Updated 2 years ago
- A reproduced PyTorch implementation of the Adversarially Reweighted Learning (ARL) model, originally presented in "Fairness without Demog… ☆20 · Updated 4 years ago
- 🤫 Code and benchmark for our ICLR 2024 spotlight paper "Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Con… ☆44 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆76 · Updated 5 months ago
- A resource repository for representation engineering in large language models ☆131 · Updated 9 months ago
- ☆172 · Updated last year
- ☆38 · Updated last year
- Official repository for the ICML 2023 paper "Can Neural Network Memorization Be Localized?" ☆19 · Updated last year
- 💱 A curated list of data valuation (DV) resources to design your next data marketplace ☆125 · Updated 6 months ago
- [EMNLP 2024] "Revisiting Who's Harry Potter: Towards Targeted Unlearning from a Causal Intervention Perspective" ☆27 · Updated last year
- The dataset and code for the ICLR 2024 paper "Can LLM-Generated Misinformation Be Detected?" ☆74 · Updated 9 months ago