To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models
☆33 · May 21, 2025 · Updated 9 months ago
Alternatives and similar repositories for unthinking_vulnerability
Users interested in unthinking_vulnerability are comparing it to the libraries listed below.
- Benchmarking Physical Risk Awareness of Foundation Model-based Embodied AI Agents ☆23 · Nov 28, 2024 · Updated last year
- This is the official implementation of the ICLR 2024 paper "VDC: Versatile Data Cleanser based on Visual-Linguistic Inconsistency by Multimod… ☆19 · Feb 24, 2025 · Updated last year
- ☆30 · Oct 22, 2025 · Updated 4 months ago
- ☆26 · Jun 27, 2024 · Updated last year
- PyTorch implementation of NPAttack ☆12 · Jul 7, 2020 · Updated 5 years ago
- [COLING 2025] Official repo of the paper: "Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jail… ☆12 · Jul 26, 2024 · Updated last year
- ☆14 · Apr 14, 2025 · Updated 11 months ago
- A Framework for Evaluating AI Agent Safety in Realistic Environments ☆30 · Oct 2, 2025 · Updated 5 months ago
- ☆11 · Apr 27, 2022 · Updated 3 years ago
- The open-source materials for the paper "Sparsing Law: Towards Large Language Models with Greater Activation Sparsity" ☆30 · Nov 12, 2024 · Updated last year
- ☆14 · Feb 26, 2025 · Updated last year
- Can Large Language Models Identify Authorship? (EMNLP 2024 Findings) ☆12 · Feb 4, 2025 · Updated last year
- ☆49 · Apr 4, 2025 · Updated 11 months ago
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆66 · Aug 25, 2024 · Updated last year
- ☆89 · Mar 20, 2025 · Updated last year
- DICE: Detecting In-distribution Data Contamination with LLM's Internal State ☆11 · Sep 21, 2024 · Updated last year
- [NeurIPS 2025] "Reasoning Models Better Express Their Confidence" ☆22 · Nov 19, 2025 · Updated 4 months ago
- ☆21 · Mar 17, 2025 · Updated last year
- ☆33 · Oct 13, 2025 · Updated 5 months ago
- ☆19 · May 14, 2025 · Updated 10 months ago
- [NeurIPS 2024 D&B] DetectRL: Benchmarking LLM-Generated Text Detection in Real-World Scenarios ☆14 · Nov 19, 2024 · Updated last year
- ☆49 · Feb 25, 2026 · Updated 3 weeks ago
- [ICML 2025] Speak Easy: Eliciting Harmful Jailbreaks from LLMs with Simple Interactions ☆14 · Mar 7, 2026 · Updated last week
- ☆40 · Oct 12, 2025 · Updated 5 months ago
- ☆36 · May 21, 2025 · Updated 9 months ago
- Code for the paper "PoisonPrompt: Backdoor Attack on Prompt-based Large Language Models" (IEEE ICASSP 2024). Demo: //124.220.228.133:11107 ☆20 · Aug 10, 2024 · Updated last year
- Official repository of the paper "Context-DPO: Aligning Language Models for Context-Faithfulness" ☆21 · Feb 17, 2025 · Updated last year
- ☆40 · May 17, 2025 · Updated 10 months ago
- An ethnic-culture dataset for large language models (面向大模型的民族文化数据集) ☆12 · May 26, 2025 · Updated 9 months ago
- 🔥🔥🔥 Detecting hidden backdoors in Large Language Models with only black-box access ☆53 · Jun 2, 2025 · Updated 9 months ago
- Code repository for the paper "Predicting the Performance of Black-Box LLMs through Self-Queries" ☆12 · Jan 9, 2025 · Updated last year
- ☆29 · Mar 3, 2021 · Updated 5 years ago
- ☆25 · Nov 19, 2025 · Updated 4 months ago
- Official implementation of the NeurIPS 2024 paper "BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens" ☆28 · Feb 17, 2026 · Updated last month
- ☆29 · Feb 27, 2025 · Updated last year
- Official code for the FAccT '21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" (https://arxiv.org/abs…) ☆13 · Mar 9, 2021 · Updated 5 years ago
- [NeurIPS 2024] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling ☆34 · Nov 8, 2024 · Updated last year
- [ACL 2025] Official code for "AGrail: A Lifelong Agent Guardrail with Effective and Adaptive Safety Detection" ☆37 · Aug 4, 2025 · Updated 7 months ago
- Implementation of AdaCQR (COLING 2025) ☆13 · Dec 30, 2024 · Updated last year