ntunlp / LLMSanitize
An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs).
☆57 · Updated 9 months ago
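For orientation, dataset-level contamination detection is often framed as measuring how many of a benchmark example's n-grams already appear in a candidate training corpus. The sketch below illustrates that idea in plain Python; it is a minimal, hypothetical example of the general technique, not LLMSanitize's actual API, and all names in it are made up for illustration.

```python
# Minimal, hypothetical sketch of an n-gram-overlap contamination check.
# NOT LLMSanitize's API; function names below are illustrative only.
from typing import Iterable, Set, Tuple


def ngrams(text: str, n: int = 8) -> Set[Tuple[str, ...]]:
    """Lowercased whitespace-token n-grams of a string."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def overlap_ratio(eval_example: str, corpus_docs: Iterable[str], n: int = 8) -> float:
    """Fraction of the example's n-grams that also occur somewhere in the corpus."""
    example_grams = ngrams(eval_example, n)
    if not example_grams:
        return 0.0
    corpus_grams: Set[Tuple[str, ...]] = set()
    for doc in corpus_docs:
        corpus_grams |= ngrams(doc, n)
    return len(example_grams & corpus_grams) / len(example_grams)


if __name__ == "__main__":
    docs = ["the quick brown fox jumps over the lazy dog near the river bank"]
    example = "the quick brown fox jumps over the lazy dog near the river"
    # A high ratio flags the benchmark example as likely present in the training data.
    print(f"overlap: {overlap_ratio(example, docs):.2f}")
```

Thresholding such a score (for example, flagging examples whose ratio exceeds some cutoff) is one common way an overlap measure is turned into a contaminated/clean decision; real tools add tokenization, normalization, and scalable corpus indexing on top of this idea.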
Alternatives and similar repositories for LLMSanitize
Users who are interested in LLMSanitize are comparing it to the libraries listed below.
- The Paper List on Data Contamination for Large Language Models Evaluation. ☆94 · Updated 2 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Github repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- ☆74 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆76 · Updated last year
- The LM Contamination Index is a manually created database of contamination evidence for LMs. ☆78 · Updated last year
- Grade-School Math with Irrelevant Context (GSM-IC) benchmark is an arithmetic reasoning dataset built upon GSM8K, by adding irrelevant se… ☆60 · Updated 2 years ago
- ☆44 · Updated 9 months ago
- Lightweight tool to identify Data Contamination in LLMs evaluation ☆51 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆71 · Updated 6 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆111 · Updated 10 months ago
- Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/ ☆24 · Updated 2 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆64 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆68 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆111 · Updated 8 months ago
- ☆85 · Updated 2 years ago
- ☆75 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated 10 months ago
- Data and code for the preprint "In-Context Learning with Long-Context Models: An In-Depth Exploration" ☆35 · Updated 9 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing". ☆78 · Updated 4 months ago
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆49 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆110 · Updated last year
- ☆37 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆33 · Updated 9 months ago
- ☆86 · Updated 2 years ago
- ☆18 · Updated last year
- [ICLR'25] Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?" ☆75 · Updated 6 months ago
- ☆50 · Updated last year
- Official repository for ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated 2 weeks ago