Code to generate NeuralExecs (prompt injection for LLMs)
☆27Oct 5, 2025Updated 7 months ago
Alternatives and similar repositories for LLM_NeuralExec
Users that are interested in LLM_NeuralExec are comparing it to the libraries listed below. We may earn a commission when you buy through links labeled 'Ad' on this page.
Sorting:
- ☆29Aug 31, 2025Updated 8 months ago
- ☆29Sep 15, 2024Updated last year
- [USENIX Security 2025] PoisonedRAG: Knowledge Corruption Attacks to Retrieval-Augmented Generation of Large Language Models☆263Jan 27, 2026Updated 3 months ago
- ☆14Jun 6, 2023Updated 2 years ago
- Agent Security Bench (ASB)☆228Apr 16, 2026Updated 2 weeks ago
- AI Agents on DigitalOcean Gradient AI Platform • AdBuild production-ready AI agents using customizable tools or access multiple LLMs through a single endpoint. Create custom knowledge bases or connect external data.
- Implementation of BadCLIP https://arxiv.org/pdf/2311.16194.pdf☆24Mar 23, 2024Updated 2 years ago
- [CCS 2024] Optimization-based Prompt Injection Attack to LLM-as-a-Judge☆40Sep 17, 2025Updated 7 months ago
- ☆70Feb 4, 2024Updated 2 years ago
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents☆138Feb 19, 2025Updated last year
- Distribution Preserving Backdoor Attack in Self-supervised Learning☆20Jan 27, 2024Updated 2 years ago
- Code&Data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024]☆112Sep 27, 2024Updated last year
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization"☆96Apr 14, 2026Updated 3 weeks ago
- Codes for our paper "AgentMonitor: A Plug-and-Play Framework for Predictive and Secure Multi-Agent Systems"☆13Dec 13, 2024Updated last year
- Code for "Adversarial Illusions in Multi-Modal Embeddings"☆32Aug 4, 2024Updated last year
- Deploy on Railway without the complexity - Free Credits Offer • AdConnect your repo and Railway handles the rest with instant previews. Quickly provision container image services, databases, and storage volumes.
- ☆135Jul 2, 2024Updated last year
- Attack AlphaZero Go agents (NeurIPS 2022)☆22Dec 3, 2022Updated 3 years ago
- Code for paper "Poisoned classifiers are not only backdoored, they are fundamentally broken"☆26Jan 7, 2022Updated 4 years ago
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a…☆87Sep 1, 2025Updated 8 months ago
- Official repository for "Robust Prompt Optimization for Defending Language Models Against Jailbreaking Attacks"☆62Aug 8, 2024Updated last year
- ☆31Oct 23, 2024Updated last year
- Code repo for the paper: Attacking Vision-Language Computer Agents via Pop-ups☆51Dec 23, 2024Updated last year
- use angr to deobfuscation☆10Oct 8, 2019Updated 6 years ago
- [NeurIPS 2024] Official implementation for "AgentPoison: Red-teaming LLM Agents via Memory or Knowledge Base Backdoor Poisoning"☆214Apr 12, 2025Updated last year
- Wordpress hosting with auto-scaling - Free Trial Offer • AdFully Managed hosting for WordPress and WooCommerce businesses that need reliable, auto-scalable performance. Cloudways SafeUpdates now available.
- ☆84Aug 3, 2021Updated 4 years ago
- Code and datasets for the salesforce AI research paper on prompt leakage and multi-turn threats against LLMs☆22Nov 10, 2025Updated 5 months ago
- [S&P 2024] Replication Package for "Mind Your Data! Hiding Backdoors in Offline Reinforcement Learning Datasets".☆33Dec 30, 2024Updated last year
- This is the official repository for the ICLR 2025 accepted paper Badrobot: Manipulating Embodied LLMs in the Physical World.☆43Jun 26, 2025Updated 10 months ago
- ☆29Oct 14, 2021Updated 4 years ago
- TrustAgent: Towards Safe and Trustworthy LLM-based Agents☆56Feb 7, 2025Updated last year
- ☆33Jan 26, 2025Updated last year
- ☆48Sep 29, 2024Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.☆119Apr 15, 2024Updated 2 years ago
- 1-Click AI Models by DigitalOcean Gradient • AdDeploy popular AI models on DigitalOcean Gradient GPU virtual machines with just a single click. Zero configuration with optimized deployments.
- Code for Robust Fine-tuning (RbFT)☆17Jan 31, 2025Updated last year
- ☆36Sep 30, 2024Updated last year
- RAB: Provable Robustness Against Backdoor Attacks☆39Oct 3, 2023Updated 2 years ago
- Fine-tuning base models to build robust task-specific models☆35Apr 11, 2024Updated 2 years ago
- [Findings of EMNLP 2022] Expose Backdoors on the Way: A Feature-Based Efficient Defense against Textual Backdoor Attacks☆13Feb 26, 2023Updated 3 years ago
- [USENIX Security '24] An LLM-Assisted Easy-to-Trigger Backdoor Attack on Code Completion Models: Injecting Disguised Vulnerabilities agai…☆58Mar 22, 2025Updated last year
- Metaskill: A Meta-Skill for Autonomous AI Agent Team Generation☆35Feb 23, 2026Updated 2 months ago