A paper list on data contamination for large language model evaluation.
☆110 · Updated Jan 29, 2026
Alternatives and similar repositories for awesome-data-contamination
Users interested in awesome-data-contamination are comparing it to the libraries listed below.
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆60 · Updated Aug 13, 2024
- ☆16 · Updated Nov 26, 2024
- The LM Contamination Index, a manually curated database of contamination evidence for LMs. ☆82 · Updated Apr 11, 2024
- A framework for benchmarking embedding models in hybrid search scenarios (BM25 + vector search) using Weaviate. ☆38 · Updated this week
- ☆23 · Updated Dec 18, 2024
- ☆19 · Updated Oct 24, 2023
- [ACL 2025] Official code for "Learning to Reason from Feedback at Test-Time". ☆13 · Updated May 16, 2025
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering. ☆76 · Updated Jan 16, 2026
- DICE: Detecting In-distribution Data Contamination with LLM's Internal State. ☆11 · Updated Sep 21, 2024
- Code and data for the NAACL 2025 paper "IHEval: Evaluating Language Models on Following the Instruction Hierarchy". ☆17 · Updated Feb 25, 2025
- SVIP: Towards Verifiable Inference of Open-Source Large Language Models. ☆14 · Updated Jun 3, 2025
- Investigating the generalization behavior of LM probes trained to predict truth labels: (1) from one annotator to another, and (2) from e… ☆28 · Updated May 23, 2024
- Xlore2.0 code [BaiduExtractor, HudongExtractor, WikiExtractor, XloreData, XloreWeb]. ☆12 · Updated Apr 5, 2017
- Code for LLM_Catastrophic_Forgetting via SAM. ☆11 · Updated Jun 7, 2024
- [ICCV 2025] "Fine-grained Spatiotemporal Grounding on Egocentric Videos". ☆23 · Updated Nov 23, 2025
- Latest Evaluation Toolkit (LatestEval): assessing language models with the latest, uncontaminated materials. ☆28 · Updated Feb 17, 2025
- An original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆242 · Updated Nov 3, 2023
- BeHonest: Benchmarking Honesty in Large Language Models. ☆34 · Updated Aug 15, 2024
- Code for the paper "Rethinking Benchmark and Contamination for Language Models with Rephrased Samples". ☆316 · Updated Dec 20, 2023
- ☆36 · Updated May 9, 2025
- Code and data for the EMNLP 2021 paper "Self- and Pseudo-self-supervised Prediction of Speaker and Key-utterance for Multi-party Dialogue Re… ☆16 · Updated Oct 15, 2022
- Paper list for "Authorship Attribution in the Era of Large Language Models: Problems, Methodologies, and Challenges (SIGKDD Exp… ☆18 · Updated Dec 23, 2024
- Efficient fine-tuning for OpenAI GPT-OSS. ☆23 · Updated Oct 2, 2025
- [ICLR 2025] Is Your Model Really A Good Math Reasoner? Evaluating Mathematical Reasoning with Checklist. ☆35 · Updated Oct 23, 2024
- [ICLR 2025] Permute-and-Flip: An optimally robust and watermarkable decoder for LLMs. ☆19 · Updated Mar 20, 2025
- A bibliography and survey of the papers surrounding o1. ☆1,212 · Updated Nov 16, 2024
- A library for evaluating disparities in generated-image quality, diversity, and consistency across geographic regions. ☆20 · Updated Jun 3, 2024
- Official code and dataset repository for KoBBQ (TACL 2024). ☆19 · Updated May 13, 2024
- ☆20 · Updated Nov 4, 2025
- Implementation of the paper "Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters". ☆17 · Updated May 24, 2022
- Evaluation of gpt-4o on CLIcK (a Korean NLP dataset). ☆20 · Updated May 18, 2024
- Benchmarking membership inference attacks (MIAs) against LLMs. ☆28 · Updated Oct 8, 2024
- Replicating o1 inference-time scaling laws. ☆93 · Updated Dec 1, 2024
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs. ☆97 · Updated Nov 17, 2024
- A lightweight tool to identify data contamination in LLM evaluation. ☆53 · Updated Mar 8, 2024
- Holistic Evaluation of Language Models (HELM), an open-source Python framework created by the Center for Research on Foundation Models … ☆2,702 · Updated this week
- Translation of the StrategyQA dataset. ☆23 · Updated Apr 12, 2024
- Benchmarking optimizers for LLM pretraining. ☆54 · Updated Dec 30, 2025
- ☆23 · Updated Jul 5, 2024