☆47 · Updated Feb 25, 2026
Alternatives and similar repositories for OVERTHINK_public
Users interested in OVERTHINK_public are comparing it to the repositories listed below.
- Consuming Resource via Auto-generation for LLM-DoS Attack under Black-box Settings · ☆18 · Updated Sep 1, 2025
- ☆13 · Updated Jun 15, 2024
- ☆13 · Updated May 17, 2025
- AI Security Research · ☆15 · Updated Jun 21, 2023
- The official code for "An Engorgio Prompt Makes Large Language Model Babble on" · ☆21 · Updated Aug 9, 2025
- ☆39 · Updated May 17, 2025
- LobotoMl is a set of scripts and tools to assess production deployments of ML services · ☆10 · Updated May 16, 2022
- Official implementation of [USENIX Sec'25] StruQ: Defending Against Prompt Injection with Structured Queries · ☆63 · Updated Nov 10, 2025
- ☆14 · Updated Oct 6, 2024
- ReasoningShield: Safety Detection over Reasoning Traces of Large Reasoning Models · ☆25 · Updated Sep 27, 2025
- ☆137 · Updated Feb 28, 2025
- [ICLR 2025] Detecting Backdoor Samples in Contrastive Language Image Pretraining · ☆19 · Updated Feb 26, 2025
- Exploiting and defending neural networks (neural network attack-and-defense column) · ☆15 · Updated Mar 2, 2021
- ☆56 · Updated May 21, 2025
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images · ☆43 · Updated Jan 25, 2024
- Code for reproducing our paper "Low Rank Adapting Models for Sparse Autoencoder Features" · ☆17 · Updated Mar 31, 2025
- ☆122 · Updated Feb 3, 2025
- AI fun · ☆27 · Updated Feb 27, 2025
- Code for "Exploring the Devil in Graph Spectral Domain for 3D Point Cloud Attacks" · ☆27 · Updated Aug 8, 2023
- ☆21 · Updated Jul 22, 2024
- The official repository for the guided jailbreak benchmark · ☆29 · Updated Jul 28, 2025
- 🥇 Amazon Nova AI Challenge Winner - ASTRA emerged victorious as the top attacking team in Amazon's global AI safety competition, defeati… · ☆70 · Updated Aug 14, 2025
- 😎 Up-to-date & curated list of awesome Attacks on Large-Vision-Language-Models papers, methods & resources · ☆505 · Updated Feb 17, 2026
- [AAAI 2025] More Text, Less Point: Towards 3D Data-Efficient Point-Language Understanding · ☆26 · Updated May 27, 2025
- To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models · ☆33 · Updated May 21, 2025
- Bag of Tricks: Benchmarking of Jailbreak Attacks on LLMs. Empirical tricks for LLM Jailbreaking. (NeurIPS 2024) · ☆163 · Updated Nov 30, 2024
- Official implementation of paper: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers · ☆66 · Updated Aug 25, 2024
- [ECCV 2024] Adversarial Prompt Tuning for Vision-Language Models · ☆31 · Updated Nov 19, 2024
- ☆29 · Updated Oct 27, 2023
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives · ☆70 · Updated Feb 22, 2024
- [EMNLP 2025] TokenSkip: Controllable Chain-of-Thought Compression in LLMs · ☆202 · Updated Nov 30, 2025
- Code for "Adversarial Illusions in Multi-Modal Embeddings" · ☆31 · Updated Aug 4, 2024
- Mask-Enhanced Autoregressive Prediction: Pay Less Attention to Learn More · ☆34 · Updated May 17, 2025
- [ICRA 2024] WLST: Weak Labels Guided Self-training for Weakly-supervised Domain Adaptation on 3D Object Detection · ☆12 · Updated Feb 6, 2024
- Next-Toggle is a simple plug-and-use theme toggle button with multiple light and dark themes · ☆11 · Updated May 9, 2024
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur…