zjysteven / mink-plus-plus
[ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs
☆35 · Updated last week
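For readers comparing these membership-inference detectors, a minimal sketch of the Min-K%++ idea may help: each token's log-probability is z-normalized against the mean and standard deviation of log-probabilities under the model's own next-token distribution, and the lowest k% of per-token scores are averaged; a higher score suggests the text was seen during pre-training. The function name, the `k=0.2` default, and the tensor shapes below are illustrative assumptions, not this repository's actual API.

```python
import torch

def min_k_plus_plus_score(logits: torch.Tensor, input_ids: torch.Tensor, k: float = 0.2) -> float:
    """Illustrative Min-K%++-style membership score (hypothetical helper, not the repo's API).

    logits:    (seq_len, vocab_size) next-token logits for the sequence
    input_ids: (seq_len,) token ids of the same sequence
    """
    # Log-probabilities of the distributions predicting tokens 1..T-1
    log_probs = torch.log_softmax(logits[:-1].float(), dim=-1)              # (T-1, V)
    token_log_probs = log_probs.gather(-1, input_ids[1:, None]).squeeze(-1)

    # Mean and std of log p(.|x_<t) under the model's own next-token distribution
    probs = log_probs.exp()
    mu = (probs * log_probs).sum(dim=-1)
    sigma = ((probs * log_probs.pow(2)).sum(dim=-1) - mu.pow(2)).clamp_min(1e-8).sqrt()

    # Per-token z-scores; average the lowest k fraction (the "min-k%" part)
    z_scores = (token_log_probs - mu) / sigma
    k_count = max(1, int(k * z_scores.numel()))
    lowest = torch.topk(z_scores, k_count, largest=False).values
    return lowest.mean().item()
```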
Alternatives and similar repositories for mink-plus-plus:
Users interested in mink-plus-plus are comparing it to the repositories listed below
- ☆37 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆89 · Updated 8 months ago
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆62 · Updated 3 months ago
- Official Repository for Dataset Inference for LLMs ☆32 · Updated 6 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆33 · Updated last month
- EMNLP 2024: Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 3 months ago
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated 7 months ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆65 · Updated 11 months ago
- Directional Preference Alignment ☆56 · Updated 4 months ago
- ☆41 · Updated last week
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆71 · Updated 2 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆84 · Updated last week
- [NeurIPS 2023 D&B Track] Code and data for paper "Revisiting Out-of-distribution Robustness in NLP: Benchmarks, Analysis, and LLMs Evalua…" ☆31 · Updated last year
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- ☆25 · Updated 5 months ago
- ☆47 · Updated 10 months ago
- Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning [ICML 2024] ☆17 · Updated 9 months ago
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆23 · Updated 3 months ago
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆104 · Updated 10 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated 11 months ago
- ☆30 · Updated 9 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆53 · Updated 10 months ago
- ☆44 · Updated 5 months ago
- Lightweight tool to identify data contamination in LLM evaluation ☆46 · Updated 11 months ago
- ☆85 · Updated 2 years ago
- ☆30 · Updated 2 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆70 · Updated 4 months ago
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆60 · Updated last year
- ☆30 · Updated 4 months ago
- Restore safety in fine-tuned language models through task arithmetic ☆27 · Updated 10 months ago