oneal2000 / MIND
Source code for our paper MIND (ACL 2024, long paper)
☆41 · Updated last year
Alternatives and similar repositories for MIND
Users who are interested in MIND are comparing it to the repositories listed below.
- ☆25 · Updated 5 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆123 · Updated 8 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆111 · Updated 8 months ago
- ☆73 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆68 · Updated last year
- ☆75 · Updated 5 months ago
- ☆44 · Updated 6 months ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆34 · Updated 6 months ago
- ☆42 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆58 · Updated 6 months ago
- Awesome LLM Self-Consistency: a curated list on self-consistency in large language models ☆97 · Updated 9 months ago
- ☆49 · Updated 11 months ago
- ☆50 · Updated last year
- Code associated with "Tuning Language Models by Proxy" (Liu et al., 2024) ☆111 · Updated last year
- Code repo for "EfficientRAG: Efficient Retriever for Multi-Hop Question Answering" ☆48 · Updated 3 months ago
- A versatile toolkit for applying Logit Lens to modern large language models (LLMs). Currently supports Llama-3.1-8B and Qwen-2.5-7B, enab… ☆85 · Updated 3 months ago
- ☆74 · Updated last year
- 📜 Paper list on decoding methods for LLMs and LVLMs ☆48 · Updated last month
- ☆24 · Updated 2 years ago
- [ICLR 2025] InstructRAG: Instructing Retrieval-Augmented Generation via Self-Synthesized Rationales ☆97 · Updated 4 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆64 · Updated last year
- Official repository for the ICML 2024 paper "On Prompt-Driven Safeguarding for Large Language Models" ☆92 · Updated 2 weeks ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆78 · Updated 4 months ago
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… ☆25 · Updated 8 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆164 · Updated last year
- [NeurIPS 2024] How do Large Language Models Handle Multilingualism? ☆34 · Updated 6 months ago
- Safety-J: Evaluating Safety with Critique ☆16 · Updated 10 months ago
- "In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation" (ICML 2024) ☆59 · Updated last year