LLNL/LUAR
Transformer-based model for learning authorship representations.
☆46 · Updated last year
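LUAR encodes a set of documents ("episode") written by one author into a single embedding, and authorship is then compared by embedding similarity. A minimal sketch of that comparison step, with mean pooling and toy vectors standing in for the model's learned, attention-based aggregation (all names and numbers here are illustrative, not LUAR's actual API):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two author embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def author_embedding(doc_embeddings: np.ndarray) -> np.ndarray:
    """Aggregate per-document embeddings into one author representation.
    Mean pooling is a stand-in; LUAR learns this aggregation."""
    return doc_embeddings.mean(axis=0)

# Toy vectors standing in for encoder outputs (hypothetical data).
author_a = author_embedding(np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]]))
author_b = author_embedding(np.array([[0.1, 0.9, 0.2], [0.0, 0.8, 0.3]]))
query    = author_embedding(np.array([[0.85, 0.15, 0.05]]))

# The query episode should score closer to author A than to author B.
print(cosine_similarity(query, author_a) > cosine_similarity(query, author_b))
```

In a retrieval setting, the same similarity score is computed between a query episode and every candidate author, and candidates are ranked by it.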
Alternatives and similar repositories for LUAR
Users interested in LUAR are comparing it to the libraries listed below.
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models. ☆151 · Updated 3 months ago
- Official repository for the NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense…" ☆181 · Updated 2 years ago
- DetectLLM: Leveraging Log Rank Information for Zero-Shot Detection of Machine-Generated Text. ☆31 · Updated 2 years ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model. ☆70 · Updated 3 years ago
- M4: Multi-generator, Multi-domain, and Multi-lingual Black-Box Machine-Generated Text Detection. ☆39 · Updated last year
- Repository for the Bias Benchmark for QA dataset. ☆132 · Updated last year
- Recent papers on (1) the psychology of LLMs and (2) biases in LLMs. ☆50 · Updated 2 years ago
- Dataset associated with the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation". ☆84 · Updated 4 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023). ☆61 · Updated last year
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models. ☆86 · Updated last year
- Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misin…" ☆104 · Updated last year
- RAID is the largest and most challenging benchmark for AI-generated text detection (ACL 2024). ☆101 · Updated last week
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers". ☆80 · Updated last year
- Repo for the paper "Examining LLMs' Uncertainty Expression Towards Questions Outside Parametric Knowledge". ☆14 · Updated last year
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment". ☆109 · Updated last year
- A Survey of Hallucination in Large Foundation Models. ☆55 · Updated last year
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors. ☆82 · Updated 11 months ago
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆57 · Updated last year
- ICLR 2024 paper showing properties of safety tuning and exaggerated safety. ☆89 · Updated last year