akumar2709 / OVERTHINK_public
☆47 · Feb 4, 2026 · Updated last week
Alternatives and similar repositories for OVERTHINK_public
Users interested in OVERTHINK_public are comparing it to the repositories listed below.
- Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings ☆18 · Sep 1, 2025 · Updated 5 months ago
- A new algorithm that formulates jailbreaking as a reasoning problem. ☆26 · Jul 2, 2025 · Updated 7 months ago
- The official code for "An Engorgio Prompt Makes Large Language Model Babble on" ☆21 · Aug 9, 2025 · Updated 6 months ago
- ☆39 · May 17, 2025 · Updated 8 months ago
- ☆12 · Jun 15, 2024 · Updated last year
- LobotoMl is a set of scripts and tools to assess production deployments of ML services ☆10 · May 16, 2022 · Updated 3 years ago
- Official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries ☆63 · Nov 10, 2025 · Updated 3 months ago
- ☆14 · Oct 6, 2024 · Updated last year
- ReasoningShield: Safety Detection over Reasoning Traces of Large Reasoning Models ☆24 · Sep 27, 2025 · Updated 4 months ago
- ☆137 · Feb 28, 2025 · Updated 11 months ago
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining ☆19 · Feb 26, 2025 · Updated 11 months ago
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images ☆42 · Jan 25, 2024 · Updated 2 years ago
- Official code for the ICML 2024 paper "Connecting the Dots: Collaborative Fine-tuning for Black-Box Vision-Language Models" ☆19 · Jun 12, 2024 · Updated last year
- ☆121 · Feb 3, 2025 · Updated last year
- The official repository for the guided jailbreak benchmark ☆28 · Jul 28, 2025 · Updated 6 months ago
- 🥇 Amazon Nova AI Challenge Winner - ASTRA emerged victorious as the top attacking team in Amazon's global AI safety competition, defeati… ☆71 · Aug 14, 2025 · Updated 5 months ago
- 😎 An up-to-date, curated list of awesome papers, methods, and resources on attacks against Large Vision-Language Models ☆490 · Jan 27, 2026 · Updated 2 weeks ago
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models ☆33 · May 21, 2025 · Updated 8 months ago
- [AAAI 2025] More Text, Less Point: Towards 3D Data-Efficient Point-Language Understanding ☆26 · May 27, 2025 · Updated 8 months ago
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM jailbreaking. (NeurIPS 2024) ☆162 · Nov 30, 2024 · Updated last year
- Official implementation of the paper "DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers" ☆65 · Aug 25, 2024 · Updated last year
- [ECCV 2024] Adversarial Prompt Tuning for Vision-Language Models ☆30 · Nov 19, 2024 · Updated last year
- ☆29 · Oct 27, 2023 · Updated 2 years ago
- Package to optimize adversarial attacks against (Large) Language Models with varied objectives ☆70 · Feb 22, 2024 · Updated last year
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆84 · Jul 24, 2025 · Updated 6 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆85 · May 9, 2025 · Updated 9 months ago
- [ICLR 2025] Official PyTorch implementation for CPE: Concept Pinpoint Eraser for Text-to-image Diffusion Models via Residual Attention Ga… ☆12 · Apr 7, 2025 · Updated 10 months ago
- Panda Guard is designed for researching jailbreak attacks, defenses, and evaluation algorithms for large language models (LLMs). ☆61 · Jan 19, 2026 · Updated 3 weeks ago
- ☆77 · Dec 19, 2024 · Updated last year
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs ☆391 · Oct 29, 2025 · Updated 3 months ago
- Code and data to go with the Zhu et al. paper "An Objective for Nuanced LLM Jailbreaks" ☆36 · Dec 18, 2024 · Updated last year
- The PyTorch implementation of "Multimodal Transformer for Automatic 3D Annotation and Object Detection" ☆31 · Mar 8, 2023 · Updated 2 years ago
- A Survey on Jailbreak Attacks and Defenses against Multimodal Generative Models ☆302 · Jan 11, 2026 · Updated last month
- BPE tokenizer implementations in C# for Anthropic and OpenAI LLM offerings ☆14 · Oct 5, 2023 · Updated 2 years ago
- ☆12 · Apr 14, 2025 · Updated 10 months ago
- [ICML 2025] RocketKV: Accelerating Long-Context LLM Inference via Two-Stage KV Cache Compression ☆32 · Aug 7, 2025 · Updated 6 months ago
- Implementation of the CVPR 2025 paper "LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty" ☆16 · Sep 10, 2025 · Updated 5 months ago
- Chain of Attack: a Semantic-Driven Contextual Multi-Turn attacker for LLM ☆39 · Jan 17, 2025 · Updated last year
- [ICML 2025] Official source code for the paper "FlipAttack: Jailbreak LLMs via Flipping" ☆163 · May 2, 2025 · Updated 9 months ago