PKU-YuanGroup / Reasoning-Attack
☆135 · Updated 7 months ago
Alternatives and similar repositories for Reasoning-Attack
Users interested in Reasoning-Attack are comparing it to the libraries listed below.
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆74 · Updated this week
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆73 · Updated 7 months ago
- ☆104 · Updated 8 months ago
- This is the code repository for "Uncovering Safety Risks of Large Language Models through Concept Activation Vector" ☆44 · Updated 10 months ago
- Repository for the paper "REEF: Representation Encoding Fingerprints for Large Language Models," which aims to protect the IP of open-source… ☆63 · Updated 8 months ago
- A toolbox for benchmarking trustworthiness of multimodal large language models (MultiTrust, NeurIPS 2024 Datasets and Benchmarks Track) ☆166 · Updated 3 months ago
- ☆40 · Updated 6 months ago
- Accepted by ECCV 2024 ☆158 · Updated 11 months ago
- S-Eval: Towards Automated and Comprehensive Safety Evaluation for Large Language Models ☆98 · Updated 3 months ago
- ☆22 · Updated 6 months ago
- Accepted by IJCAI-24 Survey Track ☆216 · Updated last year
- Awesome jailbreak and red-teaming arXiv papers (automatically updated every 12 hours) ☆64 · Updated this week
- ☆53 · Updated last year
- Safety at Scale: A Comprehensive Survey of Large Model Safety ☆194 · Updated 7 months ago
- A survey on harmful fine-tuning attacks on large language models ☆212 · Updated this week
- [ACL 2024] CodeAttack: Revealing Safety Generalization Challenges of Large Language Models via Code Completion ☆52 · Updated last week
- ☆149 · Updated last year
- [arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker" ☆163 · Updated last year
- ☆114 · Updated 5 months ago
- ☆44 · Updated 7 months ago
- ☆63 · Updated 2 months ago
- 【ACL 2024】 SALAD benchmark & MD-Judge ☆161 · Updated 7 months ago
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆106 · Updated last year
- Official repository for "Safety in Large Reasoning Models: A Survey" - Exploring safety risks, attacks, and defenses for Large Reasoning … ☆72 · Updated last month
- Attack to induce hallucinations in LLMs ☆158 · Updated last year
- Official repository for the paper "Safety Alignment Should Be Made More Than Just a Few Tokens Deep" ☆154 · Updated 5 months ago
- Agent Security Bench (ASB) ☆124 · Updated last week
- ☆46 · Updated 4 months ago
- AlphaEdit: Null-Space Constrained Knowledge Editing for Language Models, ICLR 2025 (Outstanding Paper) ☆338 · Updated 3 months ago
- Official repository of RiOSWorld ☆40 · Updated 2 weeks ago