apartresearch/specificityplus
👩‍💻 Code for the ACL paper "Detecting Edit Failures in LLMs: An Improved Specificity Benchmark"
⭐20 · Updated last year
Alternatives and similar repositories for specificityplus:
Users interested in specificityplus are comparing it to the repositories listed below.
- This repository includes code for the paper "Does Localization Inform Editing? Surprising Differences in Where Knowledge Is Stored vs. Ca… ⭐59 · Updated last year
- Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/ ⭐21 · Updated 3 weeks ago
- ⭐44 · Updated 6 months ago
- ⭐44 · Updated last year
- EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" https://arxiv.org/abs/2210.14975 ⭐37 · Updated last year
- Inspecting and Editing Knowledge Representations in Language Models ⭐114 · Updated last year
- This repository contains the dataset and code for "WiCE: Real-World Entailment for Claims in Wikipedia" in EMNLP 2023. ⭐41 · Updated last year
- Interpreting Language Models with Contrastive Explanations (EMNLP 2022 Best Paper Honorable Mention) ⭐62 · Updated 2 years ago
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant se… ⭐58 · Updated 2 years ago
- Supporting code for the ReCEval paper ⭐28 · Updated 6 months ago
- ⭐82 · Updated 7 months ago
- CausalGym: Benchmarking causal interpretability methods on linguistic tasks ⭐41 · Updated 4 months ago
- Code for the preprint: Summarizing Differences between Text Distributions with Natural Language ⭐42 · Updated 2 years ago
- ⭐47 · Updated last year
- Code for the arXiv paper: "LLMs as Factual Reasoners: Insights from Existing Benchmarks and Beyond" ⭐59 · Updated 2 months ago
- Code for "From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Mod… ⭐35 · Updated last year
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ⭐37 · Updated 2 years ago
- ⭐46 · Updated last year
- This repository accompanies the paper "Do Prompt-Based Models Really Understand the Meaning of Their Prompts?" ⭐85 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ⭐58 · Updated last year
- WikiWhy is a new benchmark for evaluating LLMs' ability to explain cause-effect relationships. It is a QA dataset containing 9000… ⭐47 · Updated last year
- Datasets from the paper "Towards Understanding Sycophancy in Language Models" ⭐73 · Updated last year
- The accompanying code for "Transformer Feed-Forward Layers Are Key-Value Memories". Mor Geva, Roei Schuster, Jonathan Berant, and Omer Le… ⭐90 · Updated 3 years ago
- Official implementation for the ACL 2024 paper "Causal Estimation of Memorisation Profiles". ⭐21 · Updated last week
- Dataset associated with the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" ⭐77 · Updated 4 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ⭐77 · Updated last year
- ⭐20 · Updated 2 years ago
- ⭐38 · Updated last year
- ⭐58 · Updated 2 years ago
- Exploring the Limitations of Large Language Models on Multi-Hop Queries ⭐24 · Updated 3 weeks ago