scaleapi / browser-art
☆26 · Updated 3 months ago
Alternatives and similar repositories for browser-art
Users interested in browser-art are comparing it to the repositories listed below.
- [ICLR'24 Spotlight] A language model (LM)-based emulation framework for identifying the risks of LM agents with tool use ☆142 · Updated last year
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆76 · Updated last month
- [ICLR 2025] Dissecting adversarial robustness of multimodal language model agents ☆88 · Updated 3 months ago
- ☆25 · Updated 8 months ago
- Fluent student-teacher redteaming ☆21 · Updated 10 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆69 · Updated last year
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) ☆61 · Updated 4 months ago
- ☆9 · Updated last week
- The official implementation of our pre-print paper "Automatic and Universal Prompt Injection Attacks against Large Language Models". ☆49 · Updated 7 months ago
- Official implementation of ICLR'24 paper, "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…) ☆75 · Updated last year
- Repo for the research paper "SecAlign: Defending Against Prompt Injection with Preference Optimization" ☆48 · Updated 2 months ago
- Code to break Llama Guard ☆31 · Updated last year
- Adversarial Attacks on GPT-4 via Simple Random Search [Dec 2023] ☆42 · Updated last year
- Official Repository for ACL 2024 Paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆133 · Updated 10 months ago
- ☆39 · Updated 7 months ago
- Code to generate NeuralExecs (prompt injection for LLMs) ☆22 · Updated 6 months ago
- [ICLR 2025] Official Repository for "Tamper-Resistant Safeguards for Open-Weight LLMs" ☆56 · Updated 3 months ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆77 · Updated 6 months ago
- [COLM 2024] JailBreakV-28K: A comprehensive benchmark designed to evaluate the transferability of LLM jailbreak attacks to MLLMs, and fur… ☆63 · Updated 3 weeks ago
- Improving Alignment and Robustness with Circuit Breakers ☆208 · Updated 8 months ago
- Benchmark evaluation code for "SORRY-Bench: Systematically Evaluating Large Language Model Safety Refusal" (ICLR 2025) ☆54 · Updated 3 months ago
- [NeurIPS'24] RedCode: Risky Code Execution and Generation Benchmark for Code Agents ☆35 · Updated last month
- WMDP is an LLM proxy benchmark for hazardous knowledge in bio, cyber, and chemical security. We also release code for RMU, an unlearning m… ☆121 · Updated last week
- ☆30 · Updated 11 months ago
- A lightweight library for large language model (LLM) jailbreaking defense. ☆51 · Updated 7 months ago
- ☆18 · Updated 7 months ago
- ☆39 · Updated 8 months ago
- ☆76 · Updated last month
- TaskTracker is an approach to detecting task drift in Large Language Models (LLMs) by analysing their internal activations. It provides a… ☆56 · Updated 3 months ago
- ☆97 · Updated last year