swj0419 / detect-pretrain-code
This repository provides the original implementation of *Detecting Pretraining Data from Large Language Models* by *Weijia Shi, *Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer (*equal contribution).
☆215 · Updated last year
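The paper's detection method, Min-K% Prob, scores a candidate text by the average log-probability of its k% least-likely tokens under the target model: text that appeared in the pretraining data tends to contain fewer low-probability outlier tokens, so a higher score suggests membership. The snippet below is a minimal, unofficial sketch of that idea, not the repository's code; the `min_k_prob` helper, the `gpt2` checkpoint, and the `k=0.2` fraction are illustrative assumptions.

```python
# Minimal sketch of the Min-K% Prob idea (illustrative, not the repo's implementation).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def min_k_prob(text: str, model, tokenizer, k: float = 0.2) -> float:
    """Average log-probability of the k% least-likely tokens in `text`."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids   # (1, seq_len)
    with torch.no_grad():
        logits = model(input_ids).logits                         # (1, seq_len, vocab)
    # Log-probability of each token given its prefix (shift by one position).
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)        # (seq_len-1, vocab)
    token_log_probs = log_probs.gather(
        1, input_ids[0, 1:].unsqueeze(-1)
    ).squeeze(-1)                                                # (seq_len-1,)
    # Average over the k% lowest-probability tokens.
    num = max(1, int(len(token_log_probs) * k))
    lowest = torch.topk(token_log_probs, num, largest=False).values
    return lowest.mean().item()

if __name__ == "__main__":
    name = "gpt2"  # illustrative; the paper evaluates much larger LLMs
    tok = AutoTokenizer.from_pretrained(name)
    mdl = AutoModelForCausalLM.from_pretrained(name).eval()
    score = min_k_prob("The quick brown fox jumps over the lazy dog.", mdl, tok)
    # Higher score = fewer low-probability tokens = more likely seen in pretraining;
    # in practice the score is compared against a calibrated threshold.
    print(f"Min-K% Prob score: {score:.4f}")
```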
Alternatives and similar repositories for detect-pretrain-code:
Users interested in detect-pretrain-code are comparing it to the repositories listed below.
- Generative Judge for Evaluating Alignment ☆223 · Updated last year
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" ☆97 · Updated 6 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆118 · Updated last year
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆145 · Updated 8 months ago
- LLM Unlearning ☆141 · Updated last year
- A Survey on Data Selection for Language Models ☆201 · Updated 3 months ago
- RewardBench: the first evaluation tool for reward models. ☆491 · Updated last week
- Awesome LLM Self-Consistency: a curated list of self-consistency in Large Language Models ☆85 · Updated 5 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆136 · Updated 6 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't Know'" ☆106 · Updated 6 months ago
- Self-Alignment with Principle-Following Reward Models ☆150 · Updated 10 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆71 · Updated 7 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆120 · Updated 6 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆116 · Updated last month
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆452 · Updated 8 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆154 · Updated 6 months ago
- The Paper List on Data Contamination for Large Language Models Evaluation ☆86 · Updated last week
- ☆119 · Updated last month
- Benchmarking LLMs with Challenging Tasks from Real Users ☆206 · Updated 2 months ago
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆129 · Updated 2 months ago
- ☆251 · Updated last year
- Project for the paper `Instruction Tuning for Large Language Models: A Survey` ☆154 · Updated last month
- Curation of resources for LLM mathematical reasoning, most of which are screened by @tongyx361 to ensure high quality and accompanied wit… ☆102 · Updated 6 months ago
- Röttger et al. (2023): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆77 · Updated last year
- A simple GPT-based evaluation tool for multi-aspect, interpretable assessment of LLMs. ☆79 · Updated 11 months ago
- ☆158 · Updated last year
- ☆164 · Updated last week
- Code and example data for the paper "Rule Based Rewards for Language Model Safety" ☆174 · Updated 5 months ago
- [ACL 2024] LooGLE: Long Context Evaluation for Long-Context Language Models ☆171 · Updated 3 months ago
- A curated list of LLM Interpretability related material - Tutorial, Library, Survey, Paper, Blog, etc. ☆197 · Updated 3 months ago