ahans30 / Binoculars
[ICML 2024] Binoculars: Zero-Shot Detection of LLM-Generated Text
☆301 · Updated last year
Alternatives and similar repositories for Binoculars
Users interested in Binoculars are comparing it to the repositories listed below.
- Ghostbuster: Detecting Text Ghostwritten by Large Language Models (NAACL 2024) ☆161 · Updated last year
- Improving Alignment and Robustness with Circuit Breakers ☆226 · Updated 10 months ago
- ☆245 · Updated 4 months ago
- Evaluating LLMs with fewer examples ☆160 · Updated last year
- Code accompanying "How I learned to start worrying about prompt formatting". ☆109 · Updated 2 months ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". ☆254 · Updated 2 months ago
- Utilities for decoding deep representations (like sentence embeddings) back to text ☆918 · Updated 2 weeks ago
- Erasing concepts from neural representations with provable guarantees ☆232 · Updated 6 months ago
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vectors… ☆242 · Updated 6 months ago
- RAID is the largest and most challenging benchmark for AI-generated text detection. (ACL 2024) ☆81 · Updated 2 weeks ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] ☆327 · Updated 6 months ago
- Code for the paper "Fishing for Magikarp" ☆162 · Updated 3 months ago
- Attribute (or cite) statements generated by LLMs back to in-context information. ☆272 · Updated 10 months ago
- Can AI-Generated Text be Reliably Detected? ☆82 · Updated last year
- Official repository for our NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense" ☆173 · Updated last year
- Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs ☆87 · Updated 8 months ago
- Functional Benchmarks and the Reasoning Gap ☆88 · Updated 10 months ago
- Python package for measuring memorization in LLMs. ☆163 · Updated last month
- ☆137 · Updated 3 years ago
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! ☆314 · Updated 10 months ago
- ☆615 · Updated last month
- Code to reproduce "Transformers Can Do Arithmetic with the Right Embeddings", McLeish et al. (NeurIPS 2024) ☆191 · Updated last year
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs ☆91 · Updated 9 months ago
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering ☆181 · Updated 6 months ago
- RuLES: a benchmark for evaluating rule-following in language models ☆230 · Updated 5 months ago
- A Comprehensive Assessment of Trustworthiness in GPT Models ☆300 · Updated 11 months ago
- Code to break Llama Guard ☆32 · Updated last year
- ☆138 · Updated 4 months ago
- Mass-editing thousands of facts into a transformer memory (ICLR 2023) ☆510 · Updated last year
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods ☆115 · Updated last month