The Oyster series is a set of safety models developed in-house by Alibaba-AAIG, devoted to building a responsible AI ecosystem.
☆61 · Apr 2, 2026 · Updated 2 weeks ago
Alternatives and similar repositories for Oyster
Users that are interested in Oyster are comparing it to the libraries listed below.
- A Unified Benchmark and Toolbox for Multimodal Jailbreak Attack–Defense Evaluation ☆65 · Mar 2, 2026 · Updated last month
- Code for "Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks" (TIFS 2024) ☆13 · Mar 29, 2024 · Updated 2 years ago
- The first toolkit for MLRM safety evaluation, providing a unified interface for mainstream models, datasets, and jailbreaking methods! ☆15 · Apr 8, 2025 · Updated last year
- Röttger et al. (2025): "MSTS: A Multimodal Safety Test Suite for Vision-Language Models" ☆17 · Mar 31, 2025 · Updated last year
- Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment (NeurIPS 2025) ☆60 · Nov 5, 2025 · Updated 5 months ago
- video_attack: Efficient Sparse Attacks on Videos using Reinforcement Learning ☆15 · Oct 25, 2021 · Updated 4 years ago
- Code for the ACM MM 2024 paper "Multimodal Unlearnable Examples: Protecting Data against Multimodal Contrastive Learning" ☆15 · Jul 18, 2024 · Updated last year
- Improving fast adversarial training with prior-guided knowledge (TPAMI 2024) ☆43 · Apr 21, 2024 · Updated last year
- Code repo of our paper "Towards Understanding Jailbreak Attacks in LLMs: A Representation Space Analysis" (https://arxiv.org/abs/2406.10794…) ☆24 · Jul 26, 2024 · Updated last year
- Global AI Safety and Governance: Never Compromise to Vulnerabilities ☆44 · Sep 11, 2025 · Updated 7 months ago
- [ICLR 2025] On Evaluating the Durability of Safeguards for Open-Weight LLMs ☆13 · Jun 20, 2025 · Updated 9 months ago
- Official code and data for the ACL 2024 Findings paper "An Empirical Study on Parameter-Efficient Fine-Tuning for Multimodal Large Language Models" ☆25 · Nov 10, 2024 · Updated last year
- ☆24 · Dec 8, 2024 · Updated last year
- Prompt generator model for Stable Diffusion models ☆12 · Jun 20, 2023 · Updated 2 years ago
- Code for the paper "AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin" ☆36 · Jul 10, 2025 · Updated 9 months ago
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆88 · Feb 26, 2025 · Updated last year
- Code repository for the paper "Heuristic Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models" ☆16 · Aug 7, 2025 · Updated 8 months ago
- ☆23 · Jan 5, 2026 · Updated 3 months ago
- [NeurIPS25 & ICML25 Workshop on Reliable and Responsible Foundation Models] A Simple Baseline Achieving Over 90% Success Rate Against the… ☆91 · Feb 3, 2026 · Updated 2 months ago
- [ICCVW 2025 (Oral)] Robust-LLaVA: On the Effectiveness of Large-Scale Robust Image Encoders for Multi-modal Large Language Models ☆29 · Oct 20, 2025 · Updated 5 months ago
- Emoji Attack [ICML 2025] ☆41 · Jul 15, 2025 · Updated 9 months ago
- [ECCV'24 Oral] The official GitHub page for "Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking…" ☆35 · Oct 23, 2024 · Updated last year
- The code of "Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds" (CVPR 2024) ☆36 · Mar 23, 2024 · Updated 2 years ago
- Code for the ECCV 2022 paper "Watermark Vaccine: Adversarial Attacks to Prevent Watermark Removal" ☆44 · Oct 1, 2022 · Updated 3 years ago
- ☆16 · Feb 8, 2024 · Updated 2 years ago
- ☆59 · Jun 5, 2024 · Updated last year
- Official repo of "Your Agent May Misevolve: Emergent Risks in Self-evolving LLM Agents" ☆72 · Oct 28, 2025 · Updated 5 months ago
- ☆11 · Jul 31, 2022 · Updated 3 years ago
- ☆47 · Apr 7, 2025 · Updated last year
- Code for "SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation" (ICLR 2025) ☆27 · Oct 23, 2025 · Updated 5 months ago
- ☆128 · Feb 3, 2025 · Updated last year
- ☆19 · Jun 21, 2025 · Updated 9 months ago
- ☆30 · May 22, 2024 · Updated last year
- Code for "Prior-Guided Adversarial Initialization for Fast Adversarial Training" (ECCV 2022) ☆28 · Nov 25, 2022 · Updated 3 years ago
- The official repository of the paper "The Digital Cybersecurity Expert: How Far Have We Come?", presented at IEEE S&P 2025 ☆24 · May 21, 2025 · Updated 10 months ago
- Official implementation for "ALI-Agent: Assessing LLMs' Alignment with Human Values via Agent-based Evaluation" ☆21 · Jan 31, 2026 · Updated 2 months ago
- [Machine Learning 2023] Imbalanced Gradients: A Subtle Cause of Overestimated Adversarial Robustness ☆16 · Jul 5, 2024 · Updated last year
- GitHub repo for the NeurIPS 2024 paper "Safe LoRA: The Silver Lining of Reducing Safety Risks when Fine-tuning Large Language Models" ☆28 · Dec 21, 2025 · Updated 3 months ago
- Safety at Scale: A Comprehensive Survey of Large Model and Agent Safety ☆257 · Updated this week