TrustedLLM / LLMDetLinks
LLMDet is a text-detection tool that identifies which source a piece of text came from (e.g., a specific large language model or a human writer).
☆77 · Updated last year
Alternatives and similar repositories for LLMDet
Users interested in LLMDet are comparing it to the repositories listed below.
- DetectLLM: Leveraging Log Rank Information for Zero-Shot Detection of Machine-Generated Text ☆30 · Updated 2 years ago
- SeqXGPT: An advanced method for sentence-level AI-generated text detection. ☆92 · Updated last year
- The latest papers on detection of LLM-generated text and code ☆277 · Updated 2 months ago
- (NAACL 2024) Official code repository for Mixset. ☆26 · Updated 9 months ago
- [AAAI 2024] The official repository for our paper, "OUTFOX: LLM-Generated Essay Detection Through In-Context Learning with Adversarially … ☆43 · Updated 5 months ago
- A survey and reflection on the latest research breakthroughs in LLM-generated text detection, including data, detectors, metrics, current… ☆76 · Updated 9 months ago
- ☆27 · Updated 2 years ago
- M4: Multi-generator, Multi-domain, and Multi-lingual Black-Box Machine-Generated Text Detection ☆31 · Updated last year
- A survey and reflection on the latest research breakthroughs in LLM-generated text detection, including data, detectors, metrics, current… ☆227 · Updated 8 months ago
- Continuously updated list of related resources for generative LLMs like GPT, and their analysis and detection. ☆224 · Updated 3 months ago
- RAID is the largest and most challenging benchmark for AI-generated text detection. (ACL 2024) ☆83 · Updated last week
- Official repository for our NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense… ☆174 · Updated last year
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆231 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆88 · Updated 4 months ago
- Flames is a highly adversarial Chinese benchmark for evaluating LLM harmlessness, developed by Shanghai AI Lab and the Fudan NLP Group. ☆61 · Updated last year
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆135 · Updated 11 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆158 · Updated 6 months ago
- Code base for ICLR 2024 "Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature". ☆339 · Updated this week
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆157 · Updated last year
- ☆14 · Updated last year
- ☆83 · Updated last year
- [NAACL 2024] Attacks, Defenses and Evaluations for LLM Conversation Safety: A Survey ☆106 · Updated last year
- Source code of the paper "GPTScore: Evaluate as You Desire" ☆256 · Updated 2 years ago
- ☆16 · Updated last year
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆164 · Updated last year
- Code base for ICLR 2025 "Glimpse: Enabling White-Box Methods to Use Proprietary Models for Zero-Shot LLM-Generated Text Detection" ☆41 · Updated last week
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆56 · Updated last year
- Constraint Back-translation Improves Complex Instruction Following of Large Language Models ☆15 · Updated 3 months ago
- Fact-Checking the Output of Generative Large Language Models in both Annotation and Evaluation. ☆105 · Updated last year
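Several of the detectors listed above (e.g. DetectLLM, Fast-DetectGPT) are zero-shot methods built on statistics computed from a scoring language model. As a toy illustration of the log-rank intuition behind DetectLLM, here is a minimal sketch: the token ranks below are hypothetical, and the paper's actual statistics (such as its log-likelihood/log-rank ratio) differ from this simplified score.

```python
import math

def log_rank_score(token_ranks):
    """Mean log-rank of the observed tokens under a scoring LM.

    `token_ranks` are 1-based ranks of each token in the model's
    predicted next-token distribution. Machine-generated text tends
    to pick high-probability (low-rank) tokens, so a lower mean
    log-rank hints at LLM output. This is only the core intuition,
    not DetectLLM's exact statistic.
    """
    return sum(math.log(r) for r in token_ranks) / len(token_ranks)

# Hypothetical ranks for illustration:
machine_like = [1, 1, 2, 1, 3]    # mostly the model's top choices
human_like = [5, 40, 2, 17, 90]   # more "surprising" word choices

assert log_rank_score(machine_like) < log_rank_score(human_like)
```

In practice the ranks would come from a real causal LM's next-token distributions, and the score would be compared against a threshold calibrated on known human and machine text.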