☆12 · May 6, 2022 · Updated 3 years ago
Alternatives and similar repositories for Adversarial-LWTA-AutoAttack
Users interested in Adversarial-LWTA-AutoAttack compare it to the libraries listed below.
- Repository for the paper "Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models". ☆14 · Dec 16, 2024 · Updated last year
- ☆14 · Feb 26, 2025 · Updated last year
- ☆11 · Dec 18, 2024 · Updated last year
- The official repository of "Unnatural Language Are Not Bugs but Features for LLMs". ☆24 · May 20, 2025 · Updated 10 months ago
- Code for the ACM MM paper "Backdoor Attack on Crowd Counting". ☆17 · Jul 10, 2022 · Updated 3 years ago
- Implementation of our IEEE S&P 2024 paper "Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples". ☆11 · Jun 28, 2024 · Updated last year
- Code for the CVPR '23 paper "Defending Against Patch-based Backdoor Attacks on Self-Supervised Learning". ☆10 · Jun 9, 2023 · Updated 2 years ago
- Code repository for the paper "Attacking Vision-Language Computer Agents via Pop-ups". ☆51 · Dec 23, 2024 · Updated last year
- [ICLR 2024] "Data Distillation Can Be Like Vodka: Distilling More Times For Better Quality" by Xuxi Chen*, Yu Yang*, Zhangyang Wang, Baha… ☆15 · May 18, 2024 · Updated last year
- An Orthogonal Classifier for Improving the Adversarial Robustness of Neural Networks. ☆14 · Oct 22, 2021 · Updated 4 years ago
- Official implementation of the NeurIPS 2022 paper "Pre-activation Distributions Expose Backdoor Neurons". ☆15 · Jan 13, 2023 · Updated 3 years ago
- ☆18 · Dec 10, 2022 · Updated 3 years ago
- [CVPR 2025] Official repository for IMMUNE: Improving Safety Against Jailbreaks in Multi-modal LLMs via Inference-Time Alignment. ☆27 · Jun 11, 2025 · Updated 9 months ago
- Implementation of Kervolutional Neural Networks (CVPR 2019), compared with a CNN under white-box attacks. ☆12 · May 20, 2019 · Updated 6 years ago
- ☆20 · Oct 28, 2025 · Updated 5 months ago
- Official implementation of the paper "Decoupled Adversarial Contrastive Learning for Self-supervised Adversarial Robustness,"… ☆19 · Jul 15, 2024 · Updated last year
- Robustify Black-Box Models (ICLR '22 Spotlight). ☆24 · Jan 29, 2023 · Updated 3 years ago
- This repository collects sentence patterns, phrases, and abbreviations commonly used in writing computer vision papers. ☆11 · May 12, 2019 · Updated 6 years ago
- PyTorch implementation of our ICLR 2023 paper "Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?". ☆12 · Mar 13, 2023 · Updated 3 years ago
- ☆14 · Oct 7, 2022 · Updated 3 years ago
- Conversion of electrocardiography paper records to binarized digital form in order to extract features to feed in th… ☆10 · Dec 16, 2020 · Updated 5 years ago
- Simple yet effective targeted transferable attack (NeurIPS 2021). ☆51 · Nov 17, 2022 · Updated 3 years ago
- Official implementation of MoCA. ☆15 · Mar 15, 2022 · Updated 4 years ago
- 🥇 First-place solution for the LG-AI-Challenge 2022. ☆13 · Jun 6, 2023 · Updated 2 years ago
- A Framework for Evaluating AI Agent Safety in Realistic Environments. ☆30 · Oct 2, 2025 · Updated 5 months ago
- Towards Defending against Adversarial Examples via Attack-Invariant Features. ☆12 · Oct 12, 2023 · Updated 2 years ago
- ☆20 · May 6, 2022 · Updated 3 years ago
- Data-Efficient Backdoor Attacks. ☆20 · Jun 15, 2022 · Updated 3 years ago
- GitHub repository for "One-shot Neural Backdoor Erasing via Adversarial Weight Masking" (NeurIPS 2022). ☆15 · Jan 3, 2023 · Updated 3 years ago
- Official implementation of "Towards Robust Model Watermark via Reducing Parametric Vulnerability". ☆17 · Jun 3, 2024 · Updated last year
- ☆19 · Jun 5, 2023 · Updated 2 years ago
- [CVPR 2024] Official code implementation of "Data Poisoning based Backdoor Attacks to Contrastive Learning". ☆16 · Feb 10, 2025 · Updated last year
- [ICLR 2022 official code] Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness? ☆29 · Mar 15, 2022 · Updated 4 years ago
- Official PyTorch implementation of our paper "Dispersing Prompt Expansion for Class-Agnostic Object Detection" (NeurIPS 2024). ☆13 · Jan 19, 2025 · Updated last year
- Code for the paper "Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions". ☆49 · Nov 3, 2023 · Updated 2 years ago
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models". ☆15 · Aug 7, 2025 · Updated 7 months ago
- ☆21 · Mar 17, 2025 · Updated last year
- [NeurIPS 2022] "Randomized Channel Shuffling: Minimal-Overhead Backdoor Attack Detection without Clean Datasets" by Ruisi Cai*, Zhenyu Zh… ☆21 · Oct 1, 2022 · Updated 3 years ago
- Code for identifying natural backdoors in existing image datasets. ☆15 · Aug 24, 2022 · Updated 3 years ago