YichenZW / Robust-Det
The code implementation of the paper Stumbling Blocks: Stress Testing the Robustness of Machine-Generated Text Detectors Under Attacks (ACL 2024 main) by Yichen Wang, Shangbin Feng, Abe Bohan Hou, Xiao Pu, Chao Shen, Xiaoming Liu, Yulia Tsvetkov, and Tianxing He, mainly at the Paul G. Allen School of CSE, University of Washington.
☆13 · Updated last year
Alternatives and similar repositories for Robust-Det
Users interested in Robust-Det are comparing it to the repositories listed below.
- ☆89 · Updated last year
- ☆71 · Updated last year
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆48 · Updated last year
- ☆20 · Updated last year
- [AAAI 2024] History Matters: Temporal Knowledge Editing in Large Language Model ☆14 · Updated 2 years ago
- ☆55 · Updated last year
- ☆38 · Updated last year
- [EMNLP 2023] Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models ☆25 · Updated 2 years ago
- ☆25 · Updated 2 years ago
- Awesome SAE papers ☆69 · Updated 7 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆150 · Updated last year
- ☆48 · Updated 2 years ago
- ☆38 · Updated 2 years ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆118 · Updated last year
- Code for the arXiv paper "Complex Claim Verification with Evidence Retrieved in the Wild" ☆13 · Updated 2 years ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆71 · Updated 3 years ago
- Source code of the paper MIND (ACL 2024, long paper) ☆59 · Updated last month
- ☆28 · Updated last year
- Data and code for "On the Risk of Misinformation Pollution with Large Language Models" (EMNLP 2023 Findings) ☆16 · Updated 2 years ago
- AbstainQA, ACL 2024 ☆28 · Updated last year
- Unofficial re-implementation of "Trusting Your Evidence: Hallucinate Less with Context-aware Decoding" ☆33 · Updated last year
- ☆13 · Updated 2 years ago
- ☆11 · Updated last year
- ☆156 · Updated 2 years ago
- Implementation of the paper "ReDeEP: Detecting Hallucination in Retrieval-Augmented Generation via Mechanistic Interpretability" ☆56 · Updated 7 months ago
- Official code for the paper "Reasoning Models Hallucinate More: Factuality-Aware Reinforcement Learning for Large Reasoning Models" ☆20 · Updated 2 months ago
- Recent papers on (1) the psychology of LLMs and (2) biases in LLMs ☆50 · Updated 2 years ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆82 · Updated last year
- Code for the ACL 2022 paper "Knowledge Neurons in Pretrained Transformers" ☆173 · Updated last year
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆87 · Updated last year