Arvid-pku / ATOKE
[AAAI 2024] History Matters: Temporal Knowledge Editing in Large Language Model
☆12 · Updated last year
Alternatives and similar repositories for ATOKE
Users who are interested in ATOKE are comparing it to the repositories listed below.
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆109 · Updated 8 months ago
- ☆42 · Updated 5 months ago
- Source code of the ACL 2023 paper "AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression" ☆11 · Updated last year
- ☆74 · Updated 4 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆114 · Updated 7 months ago
- [EMNLP 2023] Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration ☆35 · Updated last year
- ☆11 · Updated 7 months ago
- ☆23 · Updated 5 months ago
- ☆41 · Updated last year
- ☆74 · Updated 11 months ago
- ☆14 · Updated 8 months ago
- Code for the ACL 2022 paper "Knowledge Neurons in Pretrained Transformers" ☆168 · Updated last year
- Official implementation of "Probing Language Models for Pre-training Data Detection" ☆19 · Updated 5 months ago
- [EMNLP 2023] Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models ☆24 · Updated last year
- ☆24 · Updated 2 years ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆162 · Updated last year
- ☆72 · Updated last year
- [ICLR 2024 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆68 · Updated last year
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆32 · Updated 5 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Official implementation of the ACL 2023 paper "Zero-shot Faithful Factual Error Correction" ☆17 · Updated last year
- Implementation of the EMNLP 2023 Findings paper "Improving Question Generation with Multi-level Content Planning" ☆19 · Updated last year
- ☆57 · Updated 5 months ago
- ☆11 · Updated last year
- Unofficial re-implementation of "Trusting Your Evidence: Hallucinate Less with Context-aware Decoding" ☆29 · Updated 5 months ago
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- [ACL 2024] Unveiling Linguistic Regions in Large Language Models ☆31 · Updated 11 months ago
- Code for the ACL 2023 paper "Fact-Checking Complex Claims with Program-Guided Reasoning" ☆55 · Updated last year
- [ACL 2024 Findings] "Understanding and Patching Compositional Reasoning in LLMs" ☆12 · Updated 8 months ago
- Safety-J: Evaluating Safety with Critique ☆16 · Updated 9 months ago