zwhe99 / X-SIR
[ACL 2024] Can Watermarks Survive Translation? On the Cross-lingual Consistency of Text Watermark for Large Language Models
☆39 · Updated last year
Alternatives and similar repositories for X-SIR
Users interested in X-SIR are comparing it to the repositories listed below.
- Recent papers on (1) Psychology of LLMs; (2) Biases in LLMs. ☆49 · Updated last year
- Do Large Language Models Know What They Don't Know? ☆99 · Updated 9 months ago
- ☆47 · Updated last year
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- ☆27 · Updated 2 years ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆134 · Updated 11 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- ☆27 · Updated 2 years ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆163 · Updated last year
- [ACL 2023] Code and data repo for the paper "Element-aware Summary and Summary Chain-of-Thought (SumCoT)" ☆54 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆114 · Updated 11 months ago
- ☆38 · Updated last year
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- Source code for "Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts" ☆17 · Updated last year
- Implementation of our paper "Improving Simultaneous Machine Translation with Monolingual Data", accepted to AAAI 2023. 🎉 ☆12 · Updated 2 years ago
- ☆81 · Updated 8 months ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆85 · Updated 3 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- ☆20 · Updated last year
- [ACL 2024] Unveiling Linguistic Regions in Large Language Models ☆31 · Updated last year
- An open-source library for contamination detection in NLP datasets and Large Language Models (LLMs) ☆56 · Updated last year
- Implementation of the ACL 2024 paper "MAPO: Advancing Multilingual Reasoning through Multilingual Alignment-as-Preference Optimization" ☆42 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆132 · Updated 2 years ago
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆27 · Updated last year
- ☆56 · Updated last year
- Code and data for "ConflictBank: A Benchmark for Evaluating the Influence of Knowledge Conflicts in LLM" (NeurIPS 2024 Track Datasets and… ☆49 · Updated 3 months ago
- BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs) ☆154 · Updated last year
- ☆30 · Updated last year
- Self-adaptive in-context learning ☆45 · Updated 2 years ago