swj0419 / detect-pretrain-code
This repository provides an original implementation of "Detecting Pretraining Data from Large Language Models" by *Weijia Shi, *Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer (* equal contribution).
☆223 · Updated last year
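The paper introduces Min-K% Prob, a reference-free membership test: a passage is scored by the average log-probability of its k% least likely tokens under the target model, on the intuition that text seen during pretraining rarely contains very low-probability tokens. The snippet below is a minimal sketch of that idea, not the repository's actual code; the model name `gpt2`, the fraction `k=0.2`, and any decision threshold are illustrative stand-ins.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def min_k_prob(text: str, model, tokenizer, k: float = 0.2) -> float:
    """Average log-probability of the k% least likely tokens in `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits                     # (1, seq_len, vocab)
    # Log-probability each token receives given its preceding context.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    token_lp = log_probs.gather(1, ids[0, 1:].unsqueeze(-1)).squeeze(-1)
    n = max(1, int(k * token_lp.numel()))              # size of the bottom k%
    return torch.topk(token_lp, n, largest=False).values.mean().item()

name = "gpt2"  # illustrative stand-in; the paper evaluates much larger LLMs
tok = AutoTokenizer.from_pretrained(name)
lm = AutoModelForCausalLM.from_pretrained(name).eval()
# Higher (less negative) scores suggest the passage was seen in pretraining;
# in practice the classification threshold is calibrated per model.
print(min_k_prob("Some candidate passage to test.", lm, tok))
```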
Alternatives and similar repositories for detect-pretrain-code
Users interested in detect-pretrain-code are comparing it to the repositories listed below.
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆110 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆142 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated 10 months ago
- LLM Unlearning ☆162 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆161 · Updated 3 weeks ago
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆85 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆175 · Updated 11 months ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆106 · Updated 3 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆97 · Updated 3 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆493 · Updated 4 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated 2 weeks ago
- Function Vectors in Large Language Models (ICLR 2024) ☆167 · Updated last month
- Benchmarking LLMs with Challenging Tasks from Real Users ☆223 · Updated 7 months ago
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆57 · Updated 9 months ago
- [ICLR 2024] Paper showing properties of safety tuning and exaggerated safety. ☆84 · Updated last year
- Generative Judge for Evaluating Alignment ☆238 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆111 · Updated 10 months ago
- A paper list on data contamination in large language model evaluation. ☆95 · Updated 2 months ago
- Project for the paper entitled `Instruction Tuning for Large Language Models: A Survey` ☆178 · Updated 6 months ago
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆131 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆93 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆140 · Updated last month
- [ACL 2024] SALAD benchmark & MD-Judge ☆147 · Updated 2 months ago
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning ☆242 · Updated last year
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ☆184 · Updated 7 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆208 · Updated 8 months ago
- Official repository for ACL 2025 paper "ProcessBench: Identifying Process Errors in Mathematical Reasoning"☆155Updated 2 weeks ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark ☆378 · Updated 10 months ago
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆97 · Updated 9 months ago
- A Survey on Data Selection for Language Models ☆234 · Updated last month