tatsu-lab / test_set_contamination
☆38 · Updated last year
Alternatives and similar repositories for test_set_contamination
Users interested in test_set_contamination are comparing it to the repositories listed below.
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆114 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆66 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆77 · Updated 6 months ago
- Implementation for the paper "The Surprising Effectiveness of Negative Reinforcement in LLM Reasoning" ☆74 · Updated this week
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆75 · Updated 2 years ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆94 · Updated last year
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆71 · Updated 8 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆138 · Updated 9 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆114 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆131 · Updated 2 years ago
- Paper list on data contamination in large language model evaluation ☆95 · Updated 3 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated last year
- Lightweight tool to identify data contamination in LLM evaluation ☆51 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆123 · Updated 10 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆63 · Updated 7 months ago
- ☆59 · Updated 10 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆121 · Updated 7 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆74 · Updated last month
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆60 · Updated 7 months ago
- Repository accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆78 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆60 · Updated 7 months ago
- Official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆62 · Updated 7 months ago
- ☆52 · Updated last year
- ☆29 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated 11 months ago