☆36 · Aug 28, 2025 · Updated 7 months ago
Alternatives and similar repositories for Continuous-AdvTrain
Users interested in Continuous-AdvTrain are comparing it to the libraries listed below.
- ICLR 2026: Implementation of our unlearning method "Partial Model Collapse" introduced in: "Model Collapse Is Not a Bug but a Feature in … ☆27 · Jan 4, 2026 · Updated 3 months ago
- Code to conduct an embedding attack on LLMs ☆31 · Jan 10, 2025 · Updated last year
- Official Code for "Baseline Defenses for Adversarial Attacks Against Aligned Language Models" ☆33 · Oct 26, 2023 · Updated 2 years ago
- REINFORCE Adversarial Attacks on Large Language Models: An Adaptive, Distributional, and Semantic Objective ☆22 · Feb 28, 2025 · Updated last year
- Code for NeurIPS 2024 Paper "Fight Back Against Jailbreaking via Prompt Adversarial Tuning" ☆22 · May 6, 2025 · Updated 11 months ago
- ☆68 · Jun 1, 2025 · Updated 10 months ago
- ☆48 · Sep 29, 2024 · Updated last year
- The official repository of 'Unnatural Language Are Not Bugs but Features for LLMs' ☆24 · May 20, 2025 · Updated 10 months ago
- ☆24 · Jul 25, 2024 · Updated last year
- Dataset pruning for ImageNet and LAION-2B. ☆79 · Jul 5, 2024 · Updated last year
- Flow Matching with Gaussian Process Priors for Probabilistic Time Series Forecasting, ICLR 2025 ☆28 · Dec 4, 2025 · Updated 4 months ago
- Code repo of our paper Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis (https://arxiv.org/abs/2406.10794… ☆24 · Jul 26, 2024 · Updated last year
- Code for the experiments and websites of the paper "Same Task, Different Circuits" ☆33 · Oct 21, 2025 · Updated 5 months ago
- Code for the CVPR 2020 article "Adversarial Vertex mixup: Toward Better Adversarially Robust Generalization" ☆12 · Jul 13, 2020 · Updated 5 years ago
- [CVPR 2025] Official implementation for JOOD "Playing the Fool: Jailbreaking LLMs and Multimodal LLMs with Out-of-Distribution Strategy" ☆22 · Jun 11, 2025 · Updated 9 months ago
- Personalized Steering of Large Language Models: Versatile Steering Vectors Through Bi-directional Preference Optimization ☆44 · Jul 28, 2024 · Updated last year
- Official Repository for ACL 2024 Paper SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding ☆152 · Jul 19, 2024 · Updated last year
- AnyDoor: Test-Time Backdoor Attacks on Multimodal Large Language Models ☆61 · Apr 8, 2024 · Updated 2 years ago
- [ACL 2024] Defending Large Language Models Against Jailbreaking Attacks Through Goal Prioritization ☆29 · Jul 9, 2024 · Updated last year
- Code for "Thinking Forward: Memory-Efficient Federated Finetuning of Language Models" (NeurIPS 2024). Spry is a federated learning al… ☆12 · Oct 8, 2024 · Updated last year
- Independent robustness evaluation of Improving Alignment and Robustness with Short Circuiting ☆17 · Apr 15, 2025 · Updated 11 months ago
- [ICML 2021] A fast algorithm for fitting robust decision trees. http://proceedings.mlr.press/v139/vos21a.html ☆23 · Feb 15, 2024 · Updated 2 years ago
- A lightweight library for large language model (LLM) jailbreaking defense. ☆61 · Sep 11, 2025 · Updated 6 months ago
- ☆22 · Aug 8, 2025 · Updated 8 months ago
- Official code for FAccT'21 paper "Fairness Through Robustness: Investigating Robustness Disparity in Deep Learning" https://arxiv.org/abs… ☆13 · Mar 9, 2021 · Updated 5 years ago
- JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track] ☆564 · Apr 4, 2025 · Updated last year
- ☆27 · Oct 6, 2024 · Updated last year
- HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal ☆894 · Aug 16, 2024 · Updated last year
- A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack–Defense Evaluation ☆64 · Mar 2, 2026 · Updated last month
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses, NeurIPS Spotlight 2020 ☆26 · Dec 23, 2020 · Updated 5 years ago
- Neurosymbolic transformers for multi-agent communication. ☆22 · Oct 22, 2020 · Updated 5 years ago
- [ICML 2024] Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models ☆158 · Feb 19, 2026 · Updated last month
- The implementation of our IEEE S&P 2024 paper "Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples". ☆11 · Jun 28, 2024 · Updated last year
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Mar 4, 2024 · Updated 2 years ago
- Improving Alignment and Robustness with Circuit Breakers ☆259 · Sep 24, 2024 · Updated last year
- Materials for "Multi-property Steering of Large Language Models with Dynamic Activation Composition" ☆14 · Nov 22, 2024 · Updated last year
- A Framework for Evaluating AI Agent Safety in Realistic Environments ☆31 · Oct 2, 2025 · Updated 6 months ago
- Enhancing Domain Adaptation through Prompt Gradient Alignment (NeurIPS 2024) ☆15 · Jun 16, 2024 · Updated last year
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆82 · Apr 3, 2026 · Updated last week