pillowsofwind / Course-Correction
[EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences"
☆19 · Updated 5 months ago
Alternatives and similar repositories for Course-Correction:
Users interested in Course-Correction are comparing it to the repositories listed below.
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated 8 months ago
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆89 · Updated 9 months ago
- ☆72 · Updated 9 months ago
- ☆37 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆107 · Updated 5 months ago
- ☆21 · Updated 8 months ago
- ☆30 · Updated 5 months ago
- Code for our paper "Defending ChatGPT against Jailbreak Attack via Self-Reminder" in NMI ☆47 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆66 · Updated last month
- [ICLR'25 Spotlight] Min-K%++: Improved baseline for detecting pre-training data of LLMs ☆36 · Updated last month
- Code associated with Tuning Language Models by Proxy (Liu et al., 2024) ☆107 · Updated 11 months ago
- ☆25 · Updated 5 months ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆54 · Updated 11 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆66 · Updated 2 years ago
- Repo accompanying our paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆70 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆31 · Updated 7 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆58 · Updated last year
- Weak-to-Strong Jailbreaking on Large Language Models ☆72 · Updated last year
- Official implementation of the ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…) ☆72 · Updated last year
- Recent papers on (1) psychology of LLMs and (2) biases in LLMs ☆46 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆106 · Updated 6 months ago
- ☆30 · Updated 10 months ago
- [ICML 2024] Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications ☆71 · Updated 2 weeks ago
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆23 · Updated 8 months ago
- Lightweight tool to identify data contamination in LLM evaluation ☆47 · Updated last year
- ☆41 · Updated last year
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆81 · Updated 8 months ago
- ☆53 · Updated 2 months ago
- ☆25 · Updated 9 months ago