swj0419 / detect-pretrain-code
This repository provides an original implementation of *Detecting Pretraining Data from Large Language Models* by *Weijia Shi, *Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer.
☆231 · Updated last year
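The paper's Min-K% Prob detector scores a candidate text by averaging the log-probabilities of its k% least-likely tokens; text seen during pretraining tends to have fewer low-probability outlier tokens, so a higher (less negative) score suggests membership. A minimal sketch of that scoring step, assuming token log-probabilities have already been obtained from a language model (the function name and the example values below are hypothetical, not taken from the repository):

```python
def min_k_prob(token_log_probs, k=0.2):
    """Average of the lowest k fraction of token log-probabilities.

    A higher (less negative) score suggests the text may have been
    part of the model's pretraining data (Min-K% Prob, Shi et al.).
    """
    n = max(1, int(len(token_log_probs) * k))
    lowest = sorted(token_log_probs)[:n]  # the k% least-likely tokens
    return sum(lowest) / n

# Hypothetical per-token log-probs from a causal language model:
log_probs = [-0.5, -3.2, -0.1, -4.8, -0.9, -2.0]
print(round(min_k_prob(log_probs, k=0.5), 4))  # → -3.3333
```

In practice the repository compares such scores against a threshold (or against a reference model) to decide membership; this sketch shows only the scoring function itself.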
Alternatives and similar repositories for detect-pretrain-code
Users interested in detect-pretrain-code are comparing it to the libraries listed below
- The Paper List on Data Contamination for Large Language Models Evaluation. ☆99 · Updated last week
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆154 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆129 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆185 · Updated last year
- Generative Judge for Evaluating Alignment ☆245 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆115 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆146 · Updated 10 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆114 · Updated last year
- A Survey of Hallucination in Large Foundation Models ☆54 · Updated last year
- [ACL 2024] SALAD benchmark & MD-Judge ☆158 · Updated 5 months ago
- A Survey on Data Selection for Language Models ☆247 · Updated 3 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆139 · Updated 3 months ago
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆107 · Updated last month
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆74 · Updated 9 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆507 · Updated 7 months ago
- Lightweight tool to identify data contamination in LLM evaluation ☆51 · Updated last year
- Official repository for the ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning" ☆170 · Updated 3 months ago
- ☆39 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆85 · Updated 3 months ago
- Do Large Language Models Know What They Don't Know? ☆99 · Updated 9 months ago
- Official repo for the ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆127 · Updated last year
- ☆47 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆165 · Updated 3 months ago
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆56 · Updated last year
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆75 · Updated 3 months ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆86 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆114 · Updated 11 months ago
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆42 · Updated 3 months ago
- Implementation of the ICML 2023 paper: Specializing Smaller Language Models towards Multi-Step Reasoning. ☆132 · Updated 2 years ago
- ☆280 · Updated 7 months ago