TrustedLLM / LLMDet
LLMDet is a text detection tool that identifies which source a given text was generated by (e.g., a large language model or a human writer).
☆77 · Updated last year
Alternatives and similar repositories for LLMDet
Users interested in LLMDet are comparing it to the libraries listed below.
- SeqXGPT: An advanced method for sentence-level AI-generated text detection. ☆92 · Updated last year
- A survey and reflection on the latest research breakthroughs in LLM-generated Text detection, including data, detectors, metrics, current… ☆76 · Updated 8 months ago
- DetectLLM: Leveraging Log Rank Information for Zero-Shot Detection of Machine-Generated Text ☆30 · Updated 2 years ago
- The latest papers on the detection of LLM-generated text and code ☆274 · Updated last month
- (NAACL 2024) Official code repository for Mixset. ☆26 · Updated 8 months ago
- Continuously updated list of related resources for generative LLMs like GPT and their analysis and detection. ☆223 · Updated 2 months ago
- A survey and reflection on the latest research breakthroughs in LLM-generated Text detection, including data, detectors, metrics, current… ☆225 · Updated 7 months ago
- [AAAI 2024] The official repository for our paper, "OUTFOX: LLM-Generated Essay Detection Through In-Context Learning with Adversarially … ☆44 · Updated 4 months ago
- ☆27 · Updated 2 years ago
- RAID is the largest and most challenging benchmark for AI-generated text detection. (ACL 2024) ☆79 · Updated last week
- Official repository for our NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense… ☆173 · Updated last year
- M4: Multi-generator, Multi-domain, and Multi-lingual Black-Box Machine-Generated Text Detection ☆30 · Updated last year
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- Code base for ICLR 2024 "Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature". ☆322 · Updated 4 months ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆84 · Updated 2 months ago
- ☆16 · Updated last year
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆228 · Updated last year
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆132 · Updated last year
- [ACL'24] Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning ☆166 · Updated last month
- Source Code of Paper "GPTScore: Evaluate as You Desire" ☆254 · Updated 2 years ago
- A Survey of Attributions for Large Language Models ☆209 · Updated 11 months ago
- NLPCC-2025 Shared-Task 1: LLM-Generated Text Detection ☆14 · Updated 2 months ago
- ☆13 · Updated last year
- The source code of paper "CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking" ☆75 · Updated 2 years ago
- Data and code for paper "M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models" ☆101 · Updated 2 years ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆184 · Updated last year
- Generative Judge for Evaluating Alignment ☆244 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆130 · Updated 10 months ago
- Flames is a highly adversarial Chinese benchmark for evaluating the harmlessness of LLMs, developed by Shanghai AI Lab and the Fudan NLP Group. ☆58 · Updated last year
- Constraint Back-translation Improves Complex Instruction Following of Large Language Models ☆13 · Updated 2 months ago
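Several of the zero-shot detectors listed above (e.g., DetectLLM and Fast-DetectGPT) share a common intuition: machine-generated text tends to pick tokens that a reference language model ranks highly, so token-rank statistics separate it from human text. The sketch below illustrates the log-rank signal on a toy example; `toy_distributions`, the token values, and `log_rank_score` are hypothetical stand-ins for illustration, not code from any of the repositories above, which use real language-model probabilities.

```python
import math

# Toy next-token distributions standing in for a real language model's
# softmax outputs (hypothetical values, illustration only).
toy_distributions = [
    {"the": 0.6, "a": 0.3, "cat": 0.1},
    {"cat": 0.5, "dog": 0.4, "the": 0.1},
    {"sat": 0.7, "ran": 0.2, "cat": 0.1},
]

observed_tokens = ["the", "cat", "sat"]

def log_rank_score(distributions, tokens):
    """Average log-rank of observed tokens under the model.

    Machine-generated text tends to choose high-probability tokens,
    so its average log-rank is lower than that of human-written text.
    """
    total = 0.0
    for dist, tok in zip(distributions, tokens):
        ranked = sorted(dist, key=dist.get, reverse=True)
        rank = ranked.index(tok) + 1  # 1-based rank of the observed token
        total += math.log(rank)
    return total / len(tokens)

score = log_rank_score(toy_distributions, observed_tokens)
print(score)  # prints 0.0: every observed token is the top-ranked one
```

In practice the distributions come from a causal language model, and the score is thresholded (or combined with log-likelihood, as in DetectLLM's LRR ratio) to classify a passage as machine- or human-written.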