dongjunKANG / VIM
☆10 · Updated 2 years ago
Alternatives and similar repositories for VIM
Users interested in VIM are comparing it to the libraries listed below.
- Unofficial re-implementation of "Trusting Your Evidence: Hallucinate Less with Context-aware Decoding" ☆33 · Updated last year
- ☆25 · Updated 5 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆70 · Updated 3 years ago
- Official code for the ICML 2024 paper on Persona In-Context Learning (PICLe) ☆26 · Updated last year
- [ACL 2023] Knowledge Unlearning for Mitigating Privacy Risks in Language Models ☆85 · Updated last year
- ☆47 · Updated last year
- EMNLP 2022: "MABEL: Attenuating Gender Bias using Textual Entailment Data" https://arxiv.org/abs/2210.14975 ☆38 · Updated last year
- ☆44 · Updated last year
- [NAACL'25 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆67 · Updated last year
- ☆29 · Updated last year
- Restore safety in fine-tuned language models through task arithmetic ☆29 · Updated last year
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆61 · Updated last year
- AbstainQA, ACL 2024 ☆28 · Updated last year
- Methods and evaluation for aligning language models temporally ☆30 · Updated last year
- ☆76 · Updated last year
- ☆28 · Updated last year
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- ☆53 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆119 · Updated last year
- Data, code, and models for contextual noncompliance ☆24 · Updated last year
- Code accompanying the paper "DisentQA: Disentangling Parametric and Contextual Knowledge with Counterfactual Question Answering" ☆16 · Updated 2 years ago
- ☆57 · Updated 2 years ago
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆78 · Updated last year
- A Mechanistic Understanding of Alignment Algorithms: A Case Study on DPO and Toxicity ☆84 · Updated 8 months ago
- ☆41 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- A Survey of Hallucination in Large Foundation Models ☆55 · Updated last year
- ☆177 · Updated last year
- Text generation using language models with multiple exit heads ☆16 · Updated 2 months ago
- [NeurIPS'23] Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors ☆82 · Updated 11 months ago