ahans30 / Binoculars
[ICML 2024] Binoculars: Zero-Shot Detection of LLM-Generated Text
☆242 · Updated 9 months ago
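For context, Binoculars scores a passage as the ratio of an observer model's log-perplexity to the cross-perplexity between that observer and a closely related performer model; low scores suggest machine-generated text. The sketch below illustrates that score with Hugging Face transformers, assuming the Falcon-7B / Falcon-7B-Instruct pairing described in the paper; the official repository's implementation may differ in normalization and other details.

```python
# Minimal sketch of the Binoculars score, assuming the observer/performer
# pairing from the paper (Falcon-7B and Falcon-7B-Instruct); the official
# repository may differ in implementation details.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

OBSERVER = "tiiuae/falcon-7b"            # assumed observer checkpoint
PERFORMER = "tiiuae/falcon-7b-instruct"  # assumed performer checkpoint

tok = AutoTokenizer.from_pretrained(OBSERVER)  # the two models share a tokenizer
observer = AutoModelForCausalLM.from_pretrained(OBSERVER)
performer = AutoModelForCausalLM.from_pretrained(PERFORMER)

@torch.no_grad()
def binoculars_score(text: str) -> float:
    ids = tok(text, return_tensors="pt").input_ids
    obs_logits = observer(ids).logits[:, :-1].float()   # predictions for tokens 2..L
    perf_logits = performer(ids).logits[:, :-1].float()
    targets = ids[:, 1:]

    # Log-perplexity of the text under the observer model.
    log_ppl = F.cross_entropy(obs_logits.transpose(1, 2), targets)

    # Cross-perplexity: observer's cross-entropy against the performer's
    # next-token distribution, averaged over positions.
    perf_probs = F.softmax(perf_logits, dim=-1)
    obs_logprobs = F.log_softmax(obs_logits, dim=-1)
    x_ppl = -(perf_probs * obs_logprobs).sum(dim=-1).mean()

    # Lower scores indicate text that is more likely machine-generated.
    return (log_ppl / x_ppl).item()
```

In the paper, a decision threshold for this score is calibrated separately (e.g., for a low false-positive rate); the sketch returns only the raw score.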
Alternatives and similar repositories for Binoculars:
Users interested in Binoculars are comparing it to the repositories listed below.
- Ghostbuster: Detecting Text Ghostwritten by Large Language Models (NAACL 2024) · ☆139 · Updated 8 months ago
- RAID is the largest and most challenging benchmark for AI-generated text detection. (ACL 2024) · ☆53 · Updated this week
- Improving Alignment and Robustness with Circuit Breakers · ☆186 · Updated 4 months ago
- Finding trojans in aligned LLMs. Official repository for the competition hosted at SaTML 2024. · ☆109 · Updated 8 months ago
- Official repository for our NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense…" · ☆157 · Updated last year
- Steer LLM outputs towards a certain topic/subject and enhance response capabilities using activation engineering by adding steering vectors… · ☆223 · Updated this week
- Persuasive Jailbreaker: we can persuade LLMs to jailbreak them! · ☆283 · Updated 4 months ago
- The Official Repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" · ☆108 · Updated last year
- 📝 Reference-Free automatic summarization evaluation with potential hallucination detection · ☆101 · Updated last year
- A survey and reflection on the latest research breakthroughs in LLM-generated Text detection, including data, detectors, metrics, current… · ☆67 · Updated 3 months ago
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs · ☆82 · Updated 3 months ago
- Approximation of the Claude 3 tokenizer by inspecting the generation stream · ☆123 · Updated 7 months ago
- Utilities for decoding deep representations (like sentence embeddings) back to text · ☆766 · Updated 3 weeks ago
- Code and results accompanying the paper "Refusal in Language Models Is Mediated by a Single Direction". · ☆181 · Updated 4 months ago
- Jailbreaking Leading Safety-Aligned LLMs with Simple Adaptive Attacks [ICLR 2025] · ☆264 · Updated last month
- Attribute (or cite) statements generated by LLMs back to in-context information. · ☆200 · Updated 4 months ago
- awesome synthetic (text) datasets · ☆261 · Updated 3 months ago
- Can AI-Generated Text be Reliably Detected? · ☆72 · Updated last year
- Create feature-centric and prompt-centric visualizations for sparse autoencoders (like those from Anthropic's published research). · ☆182 · Updated 2 months ago
- Code accompanying "How I learned to start worrying about prompt formatting". · ☆102 · Updated 4 months ago
- Extract full next-token probabilities via language model APIs · ☆230 · Updated last year
- ☆556 · Updated 11 months ago
- Completion After Prompt Probability. Make your LLM make a choice · ☆74 · Updated 3 months ago
- Run safety benchmarks against AI models and view detailed reports showing how well they performed. · ☆79 · Updated this week
- Evaluating LLMs with fewer examples · ☆145 · Updated 10 months ago
- Does Refusal Training in LLMs Generalize to the Past Tense? [ICLR 2025] · ☆61 · Updated last month
- Manage scalable open LLM inference endpoints in Slurm clusters · ☆252 · Updated 7 months ago
- ☆40 · Updated 6 months ago
- Python package for measuring memorization in LLMs. · ☆140 · Updated 3 months ago
- Code for In-context Vectors: Making In Context Learning More Effective and Controllable Through Latent Space Steering · ☆162 · Updated last week