Code to generate NeuralExecs (prompt injection for LLMs)
☆27 · Updated Oct 5, 2025
Alternatives and similar repositories for LLM_NeuralExec
Users interested in LLM_NeuralExec are comparing it to the repositories listed below.
- ☆29 · Updated Aug 31, 2025
- ☆14 · Updated Mar 9, 2025
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge ☆39 · Updated Sep 17, 2025
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆87 · Updated Jul 24, 2025
- Agent Security Bench (ASB) ☆186 · Updated Oct 27, 2025
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf ☆23 · Updated Mar 23, 2024
- ☆52 · Updated Feb 8, 2025
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] ☆109 · Updated Sep 27, 2024
- ☆118 · Updated Jul 2, 2024
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆124 · Updated Feb 19, 2025
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai… ☆57 · Updated Mar 22, 2025
- [CVPR 2023] Official implementation of the CVPR 2023 paper "Detecting Backdoors During the Inference Stage Based on Corruption Robust… ☆24 · Updated May 25, 2023
- Attack AlphaZero Go agents (NeurIPS 2022) ☆22 · Updated Dec 3, 2022
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks" ☆61 · Updated Aug 8, 2024
- ☆30 · Updated Sep 3, 2024
- ☆29 · Updated Oct 23, 2024
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆81 · Updated Sep 1, 2025
- List of T2I safety papers, updated daily; discussion is welcome via GitHub Discussions ☆67 · Updated Aug 12, 2024
- ☆28 · Updated Oct 14, 2021
- Code for "Adversarial Illusions in Multi-Modal Embeddings" ☆31 · Updated Aug 4, 2024
- Official repository for the ICLR 2025 paper "BadRobot: Manipulating Embodied LLMs in the Physical World" ☆41 · Updated Jun 26, 2025
- Fine-tuning base models to build robust task-specific models ☆34 · Updated Apr 11, 2024
- [S&P 2024] Replication package for "Mind Your Data! Hiding Backdoors in Offline Reinforcement Learning Datasets" ☆32 · Updated Dec 30, 2024
- This repository provides a benchmark for prompt injection attacks and defenses in LLMs ☆396 · Updated Oct 29, 2025
- Implementation of the CURE algorithm from "Robustness via Curvature Regularization, and Vice Versa" ☆32 · Updated Dec 8, 2022
- ☆11 · Updated Feb 2, 2026
- ☆37 · Updated Sep 30, 2024
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks ☆106 · Updated Apr 15, 2024
- [NeurIPS 2024] Official implementation of "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning" ☆199 · Updated Apr 12, 2025
- Source code for "Detecting Adversarial Data by Probing Multiple Perturbations Using Expected Perturbation Score" (ICML 2023) ☆40 · Updated Oct 15, 2024
- ☆83 · Updated Aug 3, 2021
- Open-source knowledge for Syllabics font design and development ☆10 · Updated Nov 13, 2024
- Code repo for the paper "Attacking Vision-Language Computer Agents via Pop-ups" ☆51 · Updated Dec 23, 2024
- GUIPilot: A consistency-based mobile GUI testing approach for detecting application-specific bugs ☆14 · Updated Jan 5, 2026
- On the Robustness of GUI Grounding Models Against Image Attacks ☆12 · Updated Apr 8, 2025
- Source files for the GloBI website ☆10 · Updated this week
- ☆48 · Updated Sep 29, 2024
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents ☆56 · Updated Feb 7, 2025
- A fast + lightweight implementation of the GCG algorithm in PyTorch ☆319 · Updated May 13, 2025
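The GCG entry above is the one item in this list that names a concrete algorithm, so a toy sketch of the idea may help readers decide whether it is relevant: greedy coordinate gradient ranks single-token substitutions by the gradient of the loss with respect to each position's one-hot token vector, then keeps the best-scoring swap. The embedding table, target vector, and quadratic loss below are made-up stand-ins (the real attack differentiates an actual LLM's loss and samples candidates randomly); this is a sketch of the technique, not code from the repository above.

```python
import numpy as np

# Toy stand-ins (assumptions): a random embedding table and an
# embedding-space target instead of a real LLM and its loss.
rng = np.random.default_rng(0)
V, D, L = 50, 8, 5                       # vocab size, embedding dim, suffix length
E = rng.normal(size=(V, D))              # toy token-embedding table
target = rng.normal(size=D)              # point the suffix should steer toward

def loss(seq):
    """Squared distance between the mean token embedding and the target."""
    return float(np.sum((E[seq].mean(axis=0) - target) ** 2))

def gcg_step(seq, top_k=8):
    """One greedy coordinate gradient step: shortlist token swaps by the
    gradient w.r.t. each position's one-hot vector, then keep the best."""
    # dL/d(embedding); with this symmetric toy loss it is the same at
    # every position, which keeps the sketch short.
    g_embed = (2.0 / len(seq)) * (E[seq].mean(axis=0) - target)
    token_grads = E @ g_embed            # directional derivative per vocab token
    candidates = np.argsort(token_grads)[:top_k]   # most loss-decreasing tokens
    best_seq, best_loss = seq, loss(seq)
    for pos in range(len(seq)):          # exhaustive over positions (toy choice)
        for tok in candidates:
            cand = seq.copy()
            cand[pos] = tok
            if (cand_loss := loss(cand)) < best_loss:
                best_seq, best_loss = cand, cand_loss
    return best_seq, best_loss

seq = rng.integers(0, V, size=L)
start = loss(seq)
for _ in range(10):
    seq, current = gcg_step(seq)
```

Because each step only accepts a swap that lowers the loss, `current` can never exceed `start`; the gradient shortlist is what keeps the search cheap compared to trying all `V` tokens at every position.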