wonderNefelibata / Awesome-LRM-Safety
Awesome Large Reasoning Model (LRM) Safety. This repository collects safety- and security-related research on large reasoning models such as the currently popular DeepSeek-R1 and OpenAI o1.
☆82 · Updated this week
Alternatives and similar repositories for Awesome-LRM-Safety
Users interested in Awesome-LRM-Safety are comparing it to the repositories listed below
- ☆14 · Feb 26, 2025 · Updated 11 months ago
- The official repository of 'Unnatural Language Are Not Bugs but Features for LLMs' ☆24 · May 20, 2025 · Updated 8 months ago
- Backdoor Safety Tuning (NeurIPS 2023 & 2024 Spotlight) ☆27 · Nov 18, 2024 · Updated last year
- This repo is for the safety topic, including attacks, defenses and studies related to reasoning and RL ☆61 · Sep 5, 2025 · Updated 5 months ago
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆49 · Jan 15, 2026 · Updated last month
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups ☆50 · Dec 23, 2024 · Updated last year
- This is the starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition. ☆90 · May 19, 2024 · Updated last year
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆66 · Jun 9, 2025 · Updated 8 months ago
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆52 · Apr 6, 2025 · Updated 10 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆258 · Sep 24, 2024 · Updated last year
- Code for LLM_Catastrophic_Forgetting via SAM. ☆11 · Jun 7, 2024 · Updated last year
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? ☆29 · Mar 15, 2022 · Updated 3 years ago
- The official implementation of USENIX Security'23 paper "Meta-Sift" -- Ten minutes or less to find a 1000-size or larger clean subset on … ☆20 · Apr 27, 2023 · Updated 2 years ago
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆87 · Aug 25, 2025 · Updated 5 months ago
- ☆24 · Dec 8, 2024 · Updated last year
- This is the repository that introduces research topics related to protecting intellectual property (IP) of AI from a data-centric perspec… ☆23 · Oct 30, 2023 · Updated 2 years ago
- [EMNLP 2025] Reasoning-to-Defend: Safety-Aware Reasoning Can Defend Large Language Models from Jailbreaking ☆12 · Aug 22, 2025 · Updated 5 months ago
- A survey on harmful fine-tuning attack for large language model ☆232 · Jan 9, 2026 · Updated last month
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆227 · Feb 3, 2026 · Updated 2 weeks ago
- ☆47 · Sep 29, 2024 · Updated last year
- A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,860 · Jan 24, 2026 · Updated 3 weeks ago
- Our research proposes a novel MoGU framework that improves LLMs' safety while preserving their usability. ☆18 · Jan 14, 2025 · Updated last year
- Code repository for the paper --- [USENIX Security 2023] Towards A Proactive ML Approach for Detecting Backdoor Poison Samples ☆30 · Jul 11, 2023 · Updated 2 years ago
- TACL 2025: Investigating Adversarial Trigger Transfer in Large Language Models ☆19 · Aug 17, 2025 · Updated 6 months ago
- This is the official GitHub repo for our paper: "BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Lang… ☆21 · Jul 3, 2024 · Updated last year
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆58 · Oct 1, 2025 · Updated 4 months ago
- Code for NeurIPS 2024 paper "Shadowcast: Stealthy Data Poisoning Attacks Against Vision-Language Models" ☆59 · Jan 15, 2025 · Updated last year
- Code for our ICLR 2023 paper Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples. ☆18 · May 31, 2023 · Updated 2 years ago
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks ☆18 · Apr 24, 2024 · Updated last year
- A resource repository for machine unlearning in large language models ☆534 · Jan 6, 2026 · Updated last month
- Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation (NeurIPS 2022) ☆33 · Dec 16, 2022 · Updated 3 years ago
- ☆12 · May 6, 2022 · Updated 3 years ago
- Data-Efficient Backdoor Attacks ☆20 · Jun 15, 2022 · Updated 3 years ago
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,771 · Feb 1, 2026 · Updated 2 weeks ago
- Accepted by IJCAI-24 Survey Track ☆231 · Aug 25, 2024 · Updated last year
- [ICLR 2025] Official codebase for the ICLR 2025 paper "Multimodal Situational Safety" ☆30 · Jun 23, 2025 · Updated 7 months ago
- [ArXiv 2025] Denial-of-Service Poisoning Attacks on Large Language Models ☆23 · Oct 22, 2024 · Updated last year
- A new algorithm that formulates jailbreaking as a reasoning problem. ☆26 · Jul 2, 2025 · Updated 7 months ago
- Code for identifying natural backdoors in existing image datasets. ☆15 · Aug 24, 2022 · Updated 3 years ago