tatsu-lab / test_set_contamination
☆41 · Updated 2 years ago
Alternatives and similar repositories for test_set_contamination
Users interested in test_set_contamination are comparing it to the repositories listed below.
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆127 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆71 · Updated 3 years ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated 2 years ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆62 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆134 · Updated last year
- The Paper List on Data Contamination for Large Language Models Evaluation ☆108 · Updated last month
- Lightweight tool to identify Data Contamination in LLMs evaluation ☆53 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- ☆52 · Updated 9 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆82 · Updated last year
- Official Code Repository for LM-Steer Paper: "Word Embeddings Are Steers for Language Models" (ACL 2024 Outstanding Paper Award) ☆134 · Updated 5 months ago
- ☆103 · Updated 2 years ago
- AI Logging for Interpretability and Explainability 🔬 ☆138 · Updated last year
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆237 · Updated 2 years ago
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆68 · Updated last year
- ☆61 · Updated 7 months ago
- Official repository for MATES: Model-Aware Data Selection for Efficient Pretraining with Data Influence Models [NeurIPS 2024] ☆77 · Updated last year
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆80 · Updated 2 years ago
- [AAAI 2025 oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆76 · Updated 3 months ago
- PASTA: Post-hoc Attention Steering for LLMs ☆132 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆80 · Updated last year
- Easy-to-Hard Generalization: Scalable Alignment Beyond Human Supervision ☆124 · Updated last year
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… ☆28 · Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆65 · Updated last year
- ☆29 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆31 · Updated last year
- GSM-Plus: Data, Code, and Evaluation for Enhancing Robust Mathematical Reasoning in Math Word Problems ☆64 · Updated last year
- LoFiT: Localized Fine-tuning on LLM Representations ☆44 · Updated 11 months ago