jiah-li / magic
The repository for the paper: Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models.
☆13 · Updated last year
Alternatives and similar repositories for magic
Users interested in magic are comparing it to the repositories listed below.
- ☆23 · Updated 7 months ago
- ☆23 · Updated last year
- This repo is for the safety topic, including attacks, defenses and studies related to reasoning and RL ☆59 · Updated 4 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆32 · Updated last year
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆65 · Updated last year
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆59 · Updated 4 months ago
- ☆44 · Updated last year
- The official repository of 'Unnatural Language Are Not Bugs but Features for LLMs' ☆24 · Updated 8 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 · Updated last year
- ☆14 · Updated 11 months ago
- ☆24 · Updated last year
- Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794…) ☆23 · Updated last year
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆86 · Updated 2 years ago
- ☆21 · Updated 10 months ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆36 · Updated last year
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups ☆50 · Updated last year
- [NeurIPS 2024 D&B] Evaluating Copyright Takedown Methods for Language Models ☆17 · Updated last year
- ☆21 · Updated 10 months ago
- [EMNLP 2024] Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue ☆38 · Updated 8 months ago
- An official implementation of "Catastrophic Failure of LLM Unlearning via Quantization" (ICLR 2025) ☆36 · Updated 11 months ago
- Official repository for ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆105 · Updated 8 months ago
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. ☆18 · Updated last year
- Code for paper "Universal Jailbreak Backdoors from Poisoned Human Feedback" ☆66 · Updated last year
- ☆27 · Updated 2 years ago
- [FCS'24] LVLM Safety paper ☆19 · Updated last year
- ☆24 · Updated 11 months ago
- ☆25 · Updated 2 years ago
- ☆51 · Updated 11 months ago
- ☆72 · Updated last year
- [CVPR 2025] Official Repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment ☆26 · Updated 7 months ago