explore_establish_exploit_llms (☆31, updated Jul 14, 2023)

Alternatives and similar repositories for explore_establish_exploit_llms

Users interested in explore_establish_exploit_llms are comparing it to the libraries listed below.
- Explore, Establish, Exploit: Red Teaming Language Models from Scratch (☆13, updated Jun 21, 2023)
- Official implementation of the ICLR'24 paper "Curiosity-driven Red Teaming for Large Language Models" (https://openreview.net/pdf?id=4KqkizX…) (☆90, updated Mar 15, 2024)
- Official implementation of the USENIX Security'23 paper "Meta-Sift" -- ten minutes or less to find a 1000-size or larger clean subset on … (☆20, updated Apr 27, 2023)
- All in How You Ask for It: Simple Black-Box Method for Jailbreak Attacks (☆18, updated Apr 24, 2024)
- Fine-tuning base models to build robust task-specific models (☆34, updated Apr 11, 2024)
- ☆28 (updated Mar 20, 2024)
- Official implementation of the paper "ASSET: Robust Backdoor Data Detection Across a Multiplicity of Deep Learning… (☆19, updated Jun 7, 2023)
- A re-implementation of the "Red Teaming Language Models with Language Models" paper by Perez et al., 2022 (☆34, updated Oct 9, 2023)
- LLM prompt attacks for hacker CTFs via CTFd (☆15, updated Dec 17, 2023)
- ☆75 (updated Jul 2, 2021)
- ☆15 (updated May 19, 2025)
- ☆197 (updated Nov 26, 2023)
- Cross-Care (☆11, updated Jun 24, 2024)
- Code and data for the NAACL 2021 paper "XFORMAL: A Benchmark for Multilingual Formality Style Transfer" (☆12, updated Jun 7, 2021)
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning (☆98, updated May 23, 2024)
- Temporal Neural Networks (☆29, updated Mar 2, 2026)
- Analyzing different ML model comparison metrics (☆17, updated Jan 20, 2024)
- ☆29 (updated Dec 2, 2024)
- ☆70 (updated Feb 4, 2024)
- Improved Few-Shot Jailbreaking Can Circumvent Aligned Language Models and Their Defenses (NeurIPS 2024) (☆65, updated Jan 11, 2025)
- Repository for the AAAI 2024 (Oral) paper "Visual Adversarial Examples Jailbreak Large Language Models" (☆269, updated May 13, 2024)
- A curated list of academic events on AI Security & Privacy (☆168, updated Aug 22, 2024)
- Dataset and evaluation code for the K-QA benchmark (☆18, updated May 26, 2024)
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" (☆108, updated Mar 8, 2024)
- Code for Adaptive Data Optimization (☆32, updated Dec 9, 2024)
- ☆10 (updated Oct 31, 2022)
- EA-HAS-Bench: Energy-Aware Hyperparameter and Architecture Search Benchmark (ICLR 2023 Spotlight) (☆18, updated Dec 8, 2024)
- ☆11 (updated Jul 6, 2024)
- Beyond ImageNet Attack (accepted at ICLR 2022): crafting adversarial examples for black-box domains (☆61, updated Jun 15, 2022)
- ☆11 (updated Oct 25, 2021)
- Python library for converting UTF to WX and vice versa for Indian languages (☆11, updated Jan 17, 2019)
- Python library for solving reinforcement learning (RL) problems using generative models (☆11, updated Feb 18, 2025)
- TARNet model with the TensorFlow 2 API (☆11, updated Jun 7, 2025)
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! (☆352, updated Oct 17, 2025)
- Restore safety in fine-tuned language models through task arithmetic (☆32, updated Mar 28, 2024)
- ☆72 (updated Feb 16, 2025)
- ☆23 (updated Sep 20, 2023)
- Lakera - ChatGPT Data Leak Protection (☆29, updated Jul 4, 2024)
- Official implementation of the ICLR'23 paper "Towards Robustness Certification Against Universal Perturbations." We calc… (☆12, updated Feb 14, 2023)