Code to conduct an embedding attack on LLMs
☆32 · Jan 10, 2025 · Updated last year
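Since every repository on this page centers on this class of attack, a minimal sketch of the core idea may help. The toy model, dimensions, seed, and learning rate below are illustrative assumptions, not this repository's actual code: instead of searching over discrete tokens, an embedding attack optimizes a *continuous* input embedding by gradient descent so a frozen model assigns high probability to an attacker-chosen target token.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Toy stand-in for a frozen LLM head: a fixed projection from
# embedding space to vocabulary logits (purely illustrative).
rng = np.random.default_rng(0)
d_model, vocab = 16, 32
W = rng.normal(size=(d_model, vocab))   # frozen "model" weights
target = 7                              # token the attacker wants emitted

# The attack: gradient descent on the adversarial embedding itself,
# minimizing cross-entropy toward the target token. The model stays fixed.
e = rng.normal(size=d_model)            # adversarial embedding (trainable)
onehot = np.zeros(vocab)
onehot[target] = 1.0
lr = 0.02
for _ in range(2000):
    p = softmax(e @ W)                  # model's next-token distribution
    grad = W @ (p - onehot)             # d(cross-entropy)/d(embedding)
    e -= lr * grad

print(int(np.argmax(e @ W)))            # → 7: the target token now dominates
```

Because the embedding space is continuous, plain gradient descent works directly; no discrete token search (as in GCG-style attacks) is needed, which is what makes embedding attacks a useful white-box upper bound on jailbreak strength.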
Alternatives and similar repositories for LLM_Embedding_Attack
Users who are interested in LLM_Embedding_Attack are comparing it to the libraries listed below.
- ☆36 · Apr 13, 2026 · Updated 3 weeks ago
- ☆11 · Dec 18, 2024 · Updated last year
- [SIGKDD 2023] HardSATGEN: Understanding the Difficulty of Hard SAT Formula Generation and A Strong Structure-Hardness-Aware Baseline · ☆22 · Jun 16, 2023 · Updated 2 years ago
- [NeurIPS 2022 Spotlight] Improving Generative Adversarial Networks via Adversarial Learning in Latent Space · ☆17 · Nov 20, 2022 · Updated 3 years ago
- ☆13 · Dec 8, 2022 · Updated 3 years ago
- [CVPR 2023] Adversarial Robustness via Random Projection Filters · ☆13 · Jun 20, 2023 · Updated 2 years ago
- An AI-powered agent utilizing LLMs (e.g., Claude) with Function Calling to automate tasks in Blender. Enhances workflow efficiency by int… · ☆23 · Mar 27, 2025 · Updated last year
- SEAT · ☆21 · Oct 10, 2023 · Updated 2 years ago
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks · ☆18 · Apr 24, 2024 · Updated 2 years ago
- ☆63 · Aug 11, 2024 · Updated last year
- A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks. · ☆119 · Apr 15, 2024 · Updated 2 years ago
- Code repository for the CVPR 2024 paper "Pre-trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness" · ☆25 · May 29, 2024 · Updated last year
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" · ☆96 · Apr 14, 2026 · Updated 3 weeks ago
- One Prompt Word is Enough to Boost Adversarial Robustness for Pre-trained Vision-Language Models · ☆59 · Apr 25, 2026 · Updated last week
- Official code for reproducibility of the NeurIPS 2023 paper "Adversarial Examples Are Not Real Features" · ☆16 · Jun 27, 2024 · Updated last year
- [ICLR 2024] Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images · ☆42 · Jan 25, 2024 · Updated 2 years ago
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794… · ☆24 · Jul 26, 2024 · Updated last year
- ☆13 · Dec 22, 2023 · Updated 2 years ago
- ☆48 · Jul 14, 2024 · Updated last year
- The official dataset of the paper "Goal-Oriented Prompt Attack and Safety Evaluation for LLMs". · ☆21 · Feb 5, 2024 · Updated 2 years ago
- Code for "Thinking Forward: Memory-Efficient Federated Finetuning of Language Models" (NeurIPS 2024). Spry is a federated learning al… · ☆12 · Oct 8, 2024 · Updated last year
- ☆20 · Jun 21, 2024 · Updated last year
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents · ☆138 · Feb 19, 2025 · Updated last year
- ICML 2024 paper "Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies" · ☆18 · Jul 10, 2024 · Updated last year
- Code for the paper "The Philosopher's Stone: Trojaning Plugins of Large Language Models" · ☆29 · Sep 11, 2024 · Updated last year
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) · ☆65 · Jan 11, 2025 · Updated last year
- Code for Voice Jailbreak Attacks Against GPT-4o. · ☆38 · May 31, 2024 · Updated last year
- ☆130 · Feb 3, 2025 · Updated last year
- Dataset pruning for ImageNet and LAION-2B. · ☆80 · Jul 5, 2024 · Updated last year
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) · ☆64 · Nov 5, 2025 · Updated 6 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking … · ☆35 · Oct 23, 2024 · Updated last year
- [NeurIPS 2022] GAMA: Generative Adversarial Multi-Object Scene Attacks · ☆19 · Sep 5, 2023 · Updated 2 years ago
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". · ☆70 · Oct 23, 2024 · Updated last year
- Code & data for the paper "Watch Out for Your Agents! Investigating Backdoor Threats to LLM-Based Agents" [NeurIPS 2024] · ☆112 · Sep 27, 2024 · Updated last year
- ☆11 · May 26, 2023 · Updated 2 years ago
- Implementation of the paper "Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing". · ☆10 · Feb 6, 2024 · Updated 2 years ago
- ☆60 · Jun 5, 2024 · Updated last year
- A Framework for Evaluating AI Agent Safety in Realistic Environments · ☆32 · Oct 2, 2025 · Updated 7 months ago
- Code repo for the NeurIPS 2023 paper "VillanDiffusion: A Unified Backdoor Attack Framework for Diffusion Models" · ☆29 · Sep 18, 2025 · Updated 7 months ago