DYR1 / MoGU
Our research proposes MoGU, a novel framework that improves LLMs' safety while preserving their usability.
☆15 · Updated 6 months ago
Alternatives and similar repositories for MoGU
Users interested in MoGU are comparing it to the repositories listed below.
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆94 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆59 · Updated last year
- [ICLR 2024] Paper showing properties of safety tuning and exaggerated safety ☆85 · Updated last year
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆76 · Updated 2 months ago
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆62 · Updated 6 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆77 · Updated 9 months ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆80 · Updated 2 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆66 · Updated last year
- LLM Unlearning ☆171 · Updated last year
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆92 · Updated last month
- Code for "Universal Adversarial Triggers Are Not Universal" ☆17 · Updated last year
- Code for the paper "Improving Weak-to-Strong Generalization with Scalable Oversight and Ensemble Learning" ☆17 · Updated last year
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models☆62Updated 7 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆154 · Updated 4 months ago
- Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models ☆29 · Updated last year
- ConceptVectors benchmark and code for the paper "Intrinsic Evaluation of Unlearning Using Parametric Knowledge Traces" ☆36 · Updated 5 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs) ☆149 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆124 · Updated 8 months ago
- Official implementation of the ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…) ☆77 · Updated last year
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.…) ☆25 · Updated 9 months ago
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆114 · Updated last year
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆102 · Updated last year
- Code for the paper "Batch-ICL: Effective, Efficient, and Order-Agnostic In-Context Learning" ☆16 · Updated last year
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆137 · Updated 11 months ago