tatsu-lab / test_set_contamination
☆39 · Updated last year
Alternatives and similar repositories for test_set_contamination
Users interested in test_set_contamination are comparing it to the repositories listed below.
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆97 · Updated last year
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆114 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆78 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆114 · Updated 11 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆77 · Updated 2 years ago
- [NeurIPS 2023] Github repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆79 · Updated 8 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆42 · Updated 3 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆129 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆60 · Updated last year
- Lightweight tool to identify Data Contamination in LLMs evaluation ☆51 · Updated last year
- The Paper List on Data Contamination for Large Language Models Evaluation ☆99 · Updated last week
- PASTA: Post-hoc Attention Steering for LLMs ☆122 · Updated 9 months ago
- This is the repo for our paper "Mr-Ben: A Comprehensive Meta-Reasoning Benchmark for Large Language Models" ☆50 · Updated 9 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆40 · Updated 7 months ago
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 11 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆74 · Updated 9 months ago
- Official github repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆141 · Updated 11 months ago
- Official Code Repository for LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆123 · Updated last month
- Restore safety in fine-tuned language models through task arithmetic ☆28 · Updated last year
- Reproduction of "RLCD: Reinforcement Learning from Contrast Distillation for Language Model Alignment" ☆69 · Updated 2 years ago
- Function Vectors in Large Language Models (ICLR 2024) ☆177 · Updated 4 months ago
- Official repository for ICLR 2024 Spotlight paper "Large Language Models Are Not Robust Multiple Choice Selectors" ☆41 · Updated 3 months ago
- ☆96 · Updated last year
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆185 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆76 · Updated 5 months ago
- Code and Data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆107 · Updated 6 months ago