martiansideofthemoon / ai-detection-paraphrases
Official repository for our NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense" (https://arxiv.org/abs/2303.13408).
☆173 · Updated last year
Alternatives and similar repositories for ai-detection-paraphrases
Users interested in ai-detection-paraphrases are comparing it to the repositories listed below
- The latest papers on detection of LLM-generated text and code ☆274 · Updated last month
- Can AI-Generated Text be Reliably Detected? ☆81 · Updated last year
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆103 · Updated last year
- Continuously updated list of related resources for generative LLMs like GPT and their analysis and detection. ☆223 · Updated 2 months ago
- ☆141 · Updated last year
- Code and data of the EMNLP 2022 paper "Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversaria…" ☆53 · Updated 2 years ago
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆106 · Updated 5 months ago
- A survey and reflection on the latest research breakthroughs in LLM-generated text detection, including data, detectors, metrics, current… ☆225 · Updated 7 months ago
- [ICML 2025] Weak-to-Strong Jailbreaking on Large Language Models ☆84 · Updated 3 months ago
- ☆28 · Updated 10 months ago
- DetectLLM: Leveraging Log Rank Information for Zero-Shot Detection of Machine-Generated Text ☆30 · Updated 2 years ago
- Starter kit for the Trojan Detection Challenge 2023 (LLM Edition), a NeurIPS 2023 competition ☆90 · Updated last year
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆94 · Updated 2 months ago
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety ☆85 · Updated last year
- A lightweight library for large language model (LLM) jailbreaking defense ☆54 · Updated 9 months ago
- Up-to-date LLM watermarking papers 🔥🔥🔥 ☆351 · Updated 7 months ago
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models ☆143 · Updated 7 months ago
- LLM Unlearning ☆172 · Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆96 · Updated last year
- Official repository for "Dataset Inference for LLMs" ☆36 · Updated last year
- Repo for arXiv preprint "Gradient-based Adversarial Attacks against Text Transformers" ☆107 · Updated 2 years ago
- Code for watermarking language models ☆80 · Updated 11 months ago
- AmpleGCG: Learning a Universal and Transferable Generator of Adversarial Attacks on Both Open and Closed LLMs ☆69 · Updated 9 months ago
- ☆216 · Updated 4 years ago
- This repository provides an original implementation of "Detecting Pretraining Data from Large Language Models" by *Weijia Shi, *Anirudh Aji… ☆228 · Updated last year
- Official implementation of AdvPrompter (https://arxiv.org/abs/2404.16873) ☆160 · Updated last year
- Training data extraction on GPT-2 ☆190 · Updated 2 years ago
- Dataset associated with the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" ☆79 · Updated 4 years ago
- ☆178 · Updated last year
- Implementation of the paper "Exploring the Universal Vulnerability of Prompt-based Learning Paradigm" (Findings of NAACL 2022) ☆30 · Updated 3 years ago