zjysteven / mink-plus-plus
[ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs
☆37 · Updated 3 months ago
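Min-K%++ is a membership-inference score for detecting pre-training data: each token's log-probability is normalized by the mean and standard deviation of the model's next-token log-probability distribution at that position, and the lowest k% of the normalized scores are averaged. Below is a minimal sketch of that idea; the function name, default k, and tensor shapes are illustrative assumptions, not the repository's actual API.

```python
import torch

def min_k_plus_plus_score(logits: torch.Tensor, input_ids: torch.Tensor, k: float = 0.2) -> float:
    """Illustrative Min-K%++-style score (higher suggests the text was seen in pre-training).

    logits: (seq_len, vocab_size) next-token logits from a causal LM
    input_ids: (seq_len,) token ids of the candidate text
    k: fraction of lowest-scoring tokens to average (assumed default)
    """
    log_probs = torch.log_softmax(logits[:-1], dim=-1)            # position t predicts token t+1
    targets = input_ids[1:].unsqueeze(-1)
    token_log_probs = log_probs.gather(-1, targets).squeeze(-1)   # log p of the observed tokens

    probs = log_probs.exp()
    mu = (probs * log_probs).sum(-1)                                          # E[log p] over the vocab
    sigma = ((probs * log_probs ** 2).sum(-1) - mu ** 2).clamp_min(1e-8).sqrt()

    z = (token_log_probs - mu) / sigma                                        # normalized per-token score
    n_lowest = max(1, int(len(z) * k))
    return z.topk(n_lowest, largest=False).values.mean().item()               # average the bottom k%
```

In practice a decision threshold on this score is calibrated against known member/non-member examples.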
Alternatives and similar repositories for mink-plus-plus
Users interested in mink-plus-plus are comparing it to the repositories listed below
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆35 · Updated 6 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆93 · Updated 11 months ago
- LoFiT: Localized Fine-tuning on LLM Representations ☆38 · Updated 4 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆58 · Updated 7 months ago
- [ICLR 2024] Paper showing properties of safety tuning and exaggerated safety ☆82 · Updated last year
- ☆42 · Updated 3 months ago
- ConceptVectors Benchmark and Code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆35 · Updated 3 months ago
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆76 · Updated last month
- ☆37 · Updated last year
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆109 · Updated last year
- ☆29 · Updated last year
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆73 · Updated last year
- ☆31 · Updated 2 months ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆97 · Updated 2 months ago
- ☆33 · Updated 4 months ago
- ☆26 · Updated 7 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆110 · Updated 8 months ago
- Mostly recording papers about models' trustworthy applications. Intending to include topics like model evaluation & analysis, security, c… ☆21 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆72 · Updated 2 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 7 months ago
- Codebase for decoding compressed trust ☆23 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆57 · Updated last year
- ☆50 · Updated last year
- [NeurIPS 2023] GitHub repository for "Composing Parameter-Efficient Modules with Arithmetic Operations" ☆61 · Updated last year
- ☆58 · Updated 10 months ago
- General-purpose activation steering library ☆68 · Updated last week
- ☆21 · Updated 2 months ago
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆81 · Updated 8 months ago