jiah-li / magic
The repository for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models".
☆11, updated 9 months ago
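For context: optimization-based jailbreaks in this family (GCG and its descendants) typically rank candidate token substitutions in an adversarial suffix by the gradient of the attack loss with respect to one-hot token indicators. The sketch below is a minimal illustration of that generic scoring step only, assuming a toy embedding-plus-linear-head stand-in for the LLM; it is not the paper's actual method of exploiting index gradients, and the loss, shapes, and `top_k` value are placeholder choices.

```python
# Minimal sketch of gradient-guided token-substitution scoring
# (GCG-style). The "model" here is a toy stand-in, NOT the paper's
# implementation; a real attack uses the target LLM and a loss over
# a target completion.
import torch

vocab_size, embed_dim, seq_len = 100, 16, 8
embedding = torch.nn.Embedding(vocab_size, embed_dim)
head = torch.nn.Linear(embed_dim, 1)  # toy stand-in for the LLM

suffix = torch.randint(0, vocab_size, (seq_len,))  # current adversarial suffix

# One-hot relaxation of the suffix so gradients w.r.t. token choices exist.
one_hot = torch.nn.functional.one_hot(suffix, vocab_size).float()
one_hot.requires_grad_(True)

embeds = one_hot @ embedding.weight          # (seq_len, embed_dim)
loss = head(embeds.mean(dim=0)).squeeze()    # placeholder adversarial loss
loss.backward()

# A more negative gradient entry suggests that swapping in that token
# would decrease the loss; keep the top-k candidates per position.
top_k = 5
candidates = (-one_hot.grad).topk(top_k, dim=1).indices  # (seq_len, top_k)
print(candidates)
```

In a full attack loop, these candidates would then be evaluated with forward passes and the best substitution kept for the next iteration.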
Alternatives and similar repositories for magic
Users interested in magic are comparing it to the repositories listed below.
- A repo for safety topics, including attacks, defenses, and studies related to reasoning and RL (☆43, updated 3 weeks ago)
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks (☆31, updated last year)
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem (☆38, updated 2 weeks ago)
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794) (☆22, updated last year)
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) (☆64, updated 8 months ago)
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models (☆85, updated 4 months ago)
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning (☆97, updated last year)
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" (☆60, updated 11 months ago)
- The official repository of "Unnatural Language Are Not Bugs but Features for LLMs" (☆21, updated 4 months ago)
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue (☆36, updated 4 months ago)
- [ICLR 2024] Data for "Multilingual Jailbreak Challenges in Large Language Models" (☆83, updated last year)
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" (☆72, updated 7 months ago)
- [FCS'24] LVLM safety paper (☆18, updated 8 months ago)
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" (☆83, updated last year)
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) (☆61, updated last year)
- Panda Guard is designed for researching jailbreak attacks, defenses, and evaluation algorithms for large language models (LLMs) (☆48, updated last month)
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" (☆19, updated 11 months ago)
- [ICLR'24] Official implementation of "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…) (☆81, updated last year)
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) (☆29, updated 7 months ago)