Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning Models to enhance their security and reliability.
☆88 · Aug 25, 2025 · Updated 6 months ago
Alternatives and similar repositories for Awesome-LRMs-Safety
Users interested in Awesome-LRMs-Safety are comparing it to the repositories listed below.
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods! ☆15 · Apr 8, 2025 · Updated 11 months ago
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… ☆36 · Mar 22, 2025 · Updated 11 months ago
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … ☆82 · Updated this week
- ☆14 · Feb 26, 2025 · Updated last year
- Code for the paper "Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion" ☆14 · Mar 28, 2024 · Updated last year
- Code and data for PAN and PAN-phys. ☆13 · Mar 20, 2023 · Updated 2 years ago
- [AAAI'26 Oral] Official Implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data ☆33 · Apr 7, 2025 · Updated 11 months ago
- Demo code for the paper "One Thing to Fool them All: Generating Interpretable, Universal, and Physically-Realizable Adversarial Features" ☆12 · Nov 30, 2023 · Updated 2 years ago
- A survey on harmful fine-tuning attacks for large language models ☆233 · Feb 25, 2026 · Updated last week
- ☆19 · Jun 21, 2025 · Updated 8 months ago
- [AAMAS 2025 Oral] CAMP: Collaborative Attention Model with Profiles for Vehicle Routing Problems ☆30 · Dec 3, 2025 · Updated 3 months ago
- [NeurIPS 2024] "Collaboration! Towards Robust Neural Methods for Routing Problems" ☆21 · Nov 16, 2024 · Updated last year
- ☆24 · Feb 17, 2026 · Updated 3 weeks ago
- ☆27 · Dec 9, 2024 · Updated last year
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆49 · Jan 15, 2026 · Updated last month
- [ICLR'24] Official repo of BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models ☆49 · Jul 24, 2024 · Updated last year
- [ICLR 2026] Neural Combinatorial Optimization for Real-World Routing ☆25 · Feb 27, 2026 · Updated last week
- The official repository for the guided jailbreak benchmark ☆29 · Jul 28, 2025 · Updated 7 months ago
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆26 · Sep 10, 2024 · Updated last year
- A toolbox for benchmarking the trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track) ☆174 · Jun 27, 2025 · Updated 8 months ago
- ☆107 · Aug 11, 2025 · Updated 6 months ago
- Code for the paper "Merging Multi-Task Models via Weight-Ensembling Mixture of Experts" ☆31 · Jun 7, 2024 · Updated last year
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆35 · Oct 23, 2024 · Updated last year
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆72 · Feb 9, 2026 · Updated last month
- ☆28 · Nov 5, 2023 · Updated 2 years ago
- AISafetyLab: A comprehensive framework covering safety attacks, defenses, evaluation, and a paper list. ☆232 · Aug 29, 2025 · Updated 6 months ago
- Official code for the paper "Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models" ☆87 · May 27, 2025 · Updated 9 months ago
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆228 · Feb 3, 2026 · Updated last month
- [AAAI 2026] Data and code for the paper "IS-Bench: Evaluating Interactive Safety of VLM-Driven Embodied Agents in Daily Household Tasks" ☆41 · Nov 24, 2025 · Updated 3 months ago
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆54 · Apr 6, 2025 · Updated 11 months ago
- ☆35 · Sep 13, 2023 · Updated 2 years ago
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,789 · Updated this week
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,879 · Updated this week
- [ICML'24 Oral] Rethinking Post-Hoc Search-Based Neural Approaches for Solving Large-Scale Traveling Salesman Problems ☆41 · Apr 6, 2025 · Updated 11 months ago
- [NeurIPS 2024] Learning to Handle Complex Constraints for Vehicle Routing Problems ☆41 · Feb 17, 2026 · Updated 3 weeks ago
- Rad-cGAN v1.0: Radar-based precipitation nowcasting model with conditional Generative Adversarial Networks for multiple dam domains ☆11 · Jul 22, 2022 · Updated 3 years ago
- CoT-Valve: Length-Compressible Chain-of-Thought Tuning ☆92 · Feb 14, 2025 · Updated last year
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆59 · Sep 11, 2025 · Updated 5 months ago
- Source code for the paper "Memory-Efficient Fine-Tuning via Low-Rank Activation Compression" ☆13 · Aug 1, 2025 · Updated 7 months ago