llnl / LUAR
Transformer-based model for learning authorship representations.
☆47 · Updated last year
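LUAR trains a transformer so that documents by the same author map to nearby embedding vectors. As a rough, stdlib-only illustration of that authorship-similarity idea — a classic character n-gram baseline, not LUAR's actual model or API — two snippets can be compared in a shared feature space:

```python
from collections import Counter
import math

def char_ngrams(text, n=3):
    # Character n-gram counts: a long-standing stylometric feature.
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two snippets in one writing style vs. a very different one.
a1 = "I reckon the weather will turn, I reckon it always does."
a2 = "I reckon the river will rise, I reckon it always has."
b1 = "Quarterly revenue increased 12% year-over-year."

same_author = cosine(char_ngrams(a1), char_ngrams(a2))
diff_author = cosine(char_ngrams(a1), char_ngrams(b1))
```

In LUAR the representation comes from a learned transformer rather than raw n-gram counts, but the downstream use — cosine similarity between authorship vectors — is analogous.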
Alternatives and similar repositories for LUAR
Users interested in LUAR are comparing it to the repositories listed below.
- M4: Multi-generator, Multi-domain, and Multi-lingual Black-Box Machine-Generated Text Detection ☆42 · Updated last year
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs). ☆59 · Updated last year
- ☆84 · Updated last week
- Repository for the Bias Benchmark for QA dataset. ☆136 · Updated 2 years ago
- ☆91 · Updated last year
- ☆28 · Updated last year
- Dataset associated with the paper "BOLD: Dataset and Metrics for Measuring Biases in Open-Ended Language Generation" ☆85 · Updated 4 years ago
- Official repository for the NeurIPS 2023 paper "Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense… ☆188 · Updated 2 years ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆71 · Updated 3 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆63 · Updated 2 years ago
- DetectLLM: Leveraging Log Rank Information for Zero-Shot Detection of Machine-Generated Text ☆32 · Updated 2 years ago
- Paper list for the survey "Combating Misinformation in the Age of LLMs: Opportunities and Challenges" and the initiative "LLMs Meet Misin… ☆106 · Updated last year
- ☆158 · Updated 2 years ago
- ☆47 · Updated 4 months ago
- ACL 2022: An Empirical Survey of the Effectiveness of Debiasing Techniques for Pre-trained Language Models ☆154 · Updated 5 months ago
- ☆43 · Updated last year
- ☆26 · Updated 2 years ago
- ☆58 · Updated 2 years ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆83 · Updated last year
- Repo accompanying the paper "Do Llamas Work in English? On the Latent Language of Multilingual Transformers" ☆80 · Updated last year
- Code and data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- Recent papers on (1) psychology of LLMs and (2) biases in LLMs ☆50 · Updated 2 years ago
- Token-level Reference-free Hallucination Detection ☆98 · Updated 2 years ago
- Code and datasets for the paper "Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment" ☆108 · Updated last year
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆86 · Updated last year
- ☆117 · Updated last year
- ☆27 · Updated 2 years ago
- A Survey of Hallucination in Large Foundation Models ☆55 · Updated 2 years ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆119 · Updated last year
- Röttger et al. (NAACL 2024): "XSTest: A Test Suite for Identifying Exaggerated Safety Behaviours in Large Language Models" ☆127 · Updated 11 months ago