zwhe99 / X-SIR
[ACL 2024] Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models
☆41 · Updated last year
Alternatives and similar repositories for X-SIR
Users who are interested in X-SIR are comparing it to the repositories listed below.
- Do Large Language Models Know What They Don’t Know? ☆102 · Updated last year
- Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- ☆48 · Updated 2 years ago
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆50 · Updated 2 years ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆99 · Updated last month
- Code for the ACL 2022 paper "Knowledge Neurons in Pretrained Transformers" ☆173 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆119 · Updated last year
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆240 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆85 · Updated 2 years ago
- Feeling confused about superalignment? Here is a reading list. ☆44 · Updated 2 years ago
- [ACL 2023] Code and data repo for the paper "Element-aware Summary and Summary Chain-of-Thought (SumCoT)" ☆53 · Updated 2 years ago
- ☆27 · Updated 2 years ago
- ☆91 · Updated last year
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆49 · Updated 2 years ago
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs). ☆175 · Updated 2 years ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆71 · Updated 3 years ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆151 · Updated last year
- ☆78 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆29 · Updated last year
- Personality Alignment of Language Models ☆53 · Updated 7 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆168 · Updated 2 years ago
- ☆89 · Updated 3 years ago
- LogiQA 2.0 dataset: logical reasoning in MRC and NLI tasks ☆102 · Updated 2 years ago
- [ACL 2024] Unveiling Linguistic Regions in Large Language Models ☆33 · Updated last year
- A method of ensemble learning for heterogeneous large language models. ☆64 · Updated last year
- A curated reading list for large language model (LLM) alignment. Take a look at our new survey "Large Language Model Alignment: A Survey"… ☆80 · Updated 2 years ago
- ☆28 · Updated last year
- The Paper List on Data Contamination for Large Language Models Evaluation ☆109 · Updated 2 weeks ago