ahans30 / Binoculars
[ICML 2024] Binoculars: Zero-Shot Detection of LLM-Generated Text
☆258 · Updated 10 months ago
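For context, Binoculars detects machine-generated text zero-shot by contrasting two closely related language models: it divides the log-perplexity of a passage under one model by the cross-perplexity between the two models' next-token distributions, and low ratios indicate machine-generated text. Below is a minimal sketch of that scoring idea using Hugging Face `transformers`; the Falcon model pair, the observer/performer roles, and the `binoculars_score` helper are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of the Binoculars scoring idea (perplexity / cross-perplexity),
# not the repository's API. Model choices and observer/performer roles are
# assumptions for illustration.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

OBSERVER = "tiiuae/falcon-7b"            # assumed observer model
PERFORMER = "tiiuae/falcon-7b-instruct"  # assumed performer model

tok = AutoTokenizer.from_pretrained(OBSERVER)
observer = AutoModelForCausalLM.from_pretrained(
    OBSERVER, torch_dtype=torch.bfloat16, device_map="auto").eval()
performer = AutoModelForCausalLM.from_pretrained(
    PERFORMER, torch_dtype=torch.bfloat16, device_map="auto").eval()

@torch.no_grad()
def binoculars_score(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    obs_logits = observer(ids.to(observer.device)).logits[:, :-1].float().cpu()
    perf_logits = performer(ids.to(performer.device)).logits[:, :-1].float().cpu()
    targets = ids[:, 1:]

    # Log-perplexity of the text: average per-token cross-entropy of the
    # performer's predictions against the actual next tokens.
    log_ppl = F.cross_entropy(perf_logits.transpose(1, 2), targets).item()

    # Cross-perplexity: expected surprise of the performer's next-token
    # distribution when the observer's predicted distribution is taken as
    # the reference, averaged over token positions.
    obs_probs = F.softmax(obs_logits, dim=-1)
    x_ppl = -(obs_probs * F.log_softmax(perf_logits, dim=-1)).sum(-1).mean().item()

    # Lower ratios suggest machine-generated text; in practice the score is
    # compared against a threshold calibrated on a reference corpus.
    return log_ppl / x_ppl
```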
Alternatives and similar repositories for Binoculars:
Users interested in Binoculars are comparing it to the repositories listed below.
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆284 · Updated 2 months ago
- Improving Alignment and Robustness with Circuit Breakers ☆192 · Updated 6 months ago
- RAID is the largest and most challenging benchmark for AI-generated text detection. (ACL 2024) ☆57 · Updated this week
- Ghostbuster: Detecting Text Ghostwritten by Large Language Models (NAACL 2024) ☆149 · Updated 10 months ago
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. ☆109 · Updated 9 months ago
- ☆563 · Updated last year
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆294 · Updated 5 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆198 · Updated 6 months ago
- Official repository for our NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense" ☆162 · Updated last year
- Utilities for decoding deep representations (like sentence embeddings) back to text ☆788 · Updated 2 months ago
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al (NeurIPS 2024) ☆187 · Updated 10 months ago
- Code for the paper "Fishing for Magikarp" ☆151 · Updated 2 weeks ago
- Python package for measuring memorization in LLMs. ☆148 · Updated 4 months ago
- Steer LLM outputs towards a certain topic or subject and enhance response capabilities using activation engineering by adding steering vectors ☆230 · Updated last month
- ☆221 · Updated last week
- Code to break Llama Guard ☆31 · Updated last year
- ☆285 · Updated last month
- Red-Teaming Language Models with DSPy ☆175 · Updated last month
- ☆168 · Updated last year
- PyTorch implementation of DetectGPT (https://arxiv.org/pdf/2301.11305v1.pdf) ☆202 · Updated 9 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆84 · Updated 4 months ago
- [ICML 2024] COLD-Attack: Jailbreaking LLMs with Stealthiness and Controllability ☆147 · Updated 3 months ago
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆68 · Updated 3 months ago
- A library for making RepE control vectors ☆562 · Updated 2 months ago
- [NDSS'25 Poster] A collection of automated evaluators for assessing jailbreak attempts. ☆133 · Updated 3 weeks ago
- Can AI-Generated Text be Reliably Detected? ☆73 · Updated last year
- A survey and reflection on the latest research breakthroughs in LLM-generated Text detection, including data, detectors, metrics, current… ☆211 · Updated 3 months ago
- LLM Self Defense: By Self Examination, LLMs know they are being tricked ☆32 · Updated 10 months ago
- Package to optimize Adversarial Attacks against (Large) Language Models with Varied Objectives ☆67 · Updated last year
- Code for watermarking language models ☆76 · Updated 6 months ago