swj0419 / detect-pretrain-code
This repository provides an original implementation of "Detecting Pretraining Data from Large Language Models" by *Weijia Shi, *Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer.
☆240 · Updated 2 years ago
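The paper's detection method, Min-K% Prob, scores a candidate text by averaging the log-probabilities of its k% least-likely tokens under the target LM; texts seen during pretraining tend to have fewer surprisingly low-probability tokens. A minimal sketch of the scoring rule (the per-token log-probs here are toy values; in practice they come from a forward pass of the model being audited, and the decision threshold is tuned on held-out data):

```python
def min_k_percent_prob(token_log_probs, k=0.2):
    """Min-K% Prob score: the mean log-probability of the k% least-likely
    tokens in a sequence. A higher (less negative) score suggests the text
    was more likely part of the model's pretraining data."""
    if not token_log_probs:
        raise ValueError("need at least one token log-probability")
    n = max(1, int(len(token_log_probs) * k))
    lowest = sorted(token_log_probs)[:n]  # the k% smallest log-probs
    return sum(lowest) / n

# Toy example with hypothetical per-token log-probs from a causal LM:
scores = [-0.1, -0.2, -5.0, -0.3, -4.0]
print(min_k_percent_prob(scores, k=0.4))  # averages the two lowest: -4.5
```

The repository's actual pipeline wraps this scoring with model loading and benchmark evaluation; this sketch shows only the aggregation step.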
Alternatives and similar repositories for detect-pretrain-code
Users interested in detect-pretrain-code are comparing it to the libraries listed below.
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆175 · Updated 2 years ago
- The Paper List on Data Contamination for Large Language Models Evaluation. ☆109 · Updated last week
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆127 · Updated last year
- A Survey on Data Selection for Language Models ☆253 · Updated 9 months ago
- ☆41 · Updated 2 years ago
- Generative Judge for Evaluating Alignment ☆250 · Updated 2 years ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper entitled "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆129 · Updated last year
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆59 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆136 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆137 · Updated 9 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆71 · Updated 3 years ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆201 · Updated 2 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆171 · Updated 11 months ago
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆535 · Updated last year
- A Survey of Hallucination in Large Foundation Models ☆56 · Updated 2 years ago
- Data and Code for Program of Thoughts [TMLR 2023] ☆303 · Updated last year
- Official Code Repository for LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆136 · Updated 6 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- Do Large Language Models Know What They Don't Know? ☆102 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆119 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆79 · Updated last year
- Lightweight tool to identify Data Contamination in LLMs evaluation ☆53 · Updated last year
- Project for the paper entitled `Instruction Tuning for Large Language Models: A Survey` ☆227 · Updated 5 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- Code for the ACL 2022 paper "Knowledge Neurons in Pretrained Transformers" ☆173 · Updated last year
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] ☆148 · Updated last year
- Awesome LLM Self-Consistency: a curated list of Self-Consistency in Large Language Models ☆119 · Updated 6 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆151 · Updated last year
- Official Repo for ICLR 2024 paper MINT: Evaluating LLMs in Multi-turn Interaction with Tools and Language Feedback by Xingyao Wang*, Ziha… ☆132 · Updated last year
- Repository for Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning ☆168 · Updated 2 years ago