WangCheng0116 / Awesome-LRMs-Safety
Official repository for "Safety in Large Reasoning Models: A Survey", exploring safety risks, attacks, and defenses for Large Reasoning Models to enhance their security and reliability.
☆87 · Updated Aug 25, 2025
Alternatives and similar repositories for Awesome-LRMs-Safety
Users interested in Awesome-LRMs-Safety are comparing it to the repositories listed below.
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods! ☆14 · Updated Apr 8, 2025
- This is the official code for the paper "Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturba… ☆36 · Updated Mar 22, 2025
- Awesome Large Reasoning Model (LRM) Safety. This repository is used to collect security-related research on large reasoning models such as … ☆82 · Updated this week
- Code for the paper "Concrete Subspace Learning based Interference Elimination for Multi-task Model Fusion" ☆14 · Updated Mar 28, 2024
- ☆14 · Updated Feb 26, 2025
- Code and data for PAN and PAN-phys. ☆13 · Updated Mar 20, 2023
- [AAAI'26 Oral] Official implementation of STAR-1: Safer Alignment of Reasoning LLMs with 1K Data ☆33 · Updated Apr 7, 2025
- ☆16 · Updated Feb 8, 2024
- A survey on harmful fine-tuning attacks for large language models ☆232 · Updated Jan 9, 2026
- ☆19 · Updated Jun 21, 2025
- [NeurIPS 2024] "Collaboration! Towards Robust Neural Methods for Routing Problems" ☆21 · Updated Nov 16, 2024
- ☆27 · Updated Dec 9, 2024
- ☆24 · Updated Aug 7, 2025
- This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large Language Models" (NeurIPS 2024) ☆49 · Updated Jan 15, 2026
- The official repository for the guided jailbreak benchmark ☆28 · Updated Jul 28, 2025
- Multi-task learning for routing problems ☆22 · Updated Dec 2, 2025
- [ICLR 2026] Neural Combinatorial Optimization for Real-World Routing ☆24 · Updated Feb 10, 2026
- This is the official code for the paper "Lazy Safety Alignment for Large Language Models against Harmful Fine-tuning" (NeurIPS 2024) ☆25 · Updated Sep 10, 2024
- A toolbox for benchmarking the trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track) ☆174 · Updated Jun 27, 2025
- [NeurIPS 2025] PARCO: Parallel AutoRegressive Combinatorial Optimization ☆38 · Updated Dec 3, 2025
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … ☆34 · Updated Oct 23, 2024
- Official repository for the TMLR paper "Self-Improvement for Neural Combinatorial Optimization: Sample Without Replacement, but Improveme… ☆29 · Updated Jan 22, 2026
- [NeurIPS 2023] T2T: From Distribution Learning in Training to Gradient Search in Testing for Combinatorial Optimization ☆70 · Updated Jul 2, 2025
- [ECCV 2024] The official code for "AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shi… ☆70 · Updated Feb 9, 2026
- Official implementation of the IJCAI'24 paper "Towards Generalizable Neural Solvers for Vehicle Routing Problems via Ensemble with Transferra… ☆24 · Updated May 15, 2024
- AISafetyLab: A comprehensive framework covering safety attacks, defenses, evaluation, and a paper list. ☆228 · Updated Aug 29, 2025
- ☆28 · Updated Nov 5, 2023
- Official code for the paper "Beyond 'Aha!': Toward Systematic Meta-Abilities Alignment in Large Reasoning Models" ☆86 · Updated May 27, 2025
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆227 · Updated Feb 3, 2026
- [AAAI 2026] Data and code for the paper "IS-Bench: Evaluating Interactive Safety of VLM-Driven Embodied Agents in Daily Household Tasks" ☆40 · Updated Nov 24, 2025
- ☆35 · Updated Sep 13, 2023
- [COLM 2025] SEAL: Steerable Reasoning Calibration of Large Language Models for Free ☆52 · Updated Apr 6, 2025
- A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide… ☆1,771 · Updated Feb 1, 2026
- A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.). ☆1,860 · Updated Jan 24, 2026
- [ICML'24 Oral] Rethinking Post-Hoc Search-Based Neural Approaches for Solving Large-Scale Traveling Salesman Problems ☆41 · Updated Apr 6, 2025
- [NeurIPS 2024] Learning to Handle Complex Constraints for Vehicle Routing Problems ☆40 · Updated Apr 2, 2025
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models ☆84 · Updated Jan 19, 2025
- The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem. | Oyster … ☆59 · Updated Sep 11, 2025
- Source code for the paper "Memory-Efficient Fine-Tuning via Low-Rank Activation Compression" ☆13 · Updated Aug 1, 2025