alibaba / eigenscore
☆18 · Updated 4 months ago
Alternatives and similar repositories for eigenscore:
Users interested in eigenscore are comparing it to the repositories listed below.
- Source code of the ACL 2023 paper "AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression" ☆11 · Updated last year
- ☆11 · Updated last year
- ☆73 · Updated 10 months ago
- ☆41 · Updated 4 months ago
- ☆24 · Updated 2 years ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆109 · Updated 6 months ago
- Source code of our paper MIND, ACL 2024 Long Paper ☆39 · Updated 10 months ago
- [ACL 2024 Findings] "Understanding and Patching Compositional Reasoning in LLMs" ☆12 · Updated 7 months ago
- ☆164 · Updated 9 months ago
- ☆71 · Updated 3 months ago
- [NAACL 2025 Oral] Steering Knowledge Selection Behaviours in LLMs via SAE-Based Representation Engineering ☆53 · Updated 4 months ago
- ☆12 · Updated 7 months ago
- Awesome SAE papers ☆25 · Updated last month
- Repository for our paper "Untying the Reversal Curse via Bidirectional Language Model Editing" ☆10 · Updated last year
- ☆41 · Updated last year
- ☆30 · Updated 2 weeks ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆58 · Updated last year
- [EMNLP 2024] Official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆112 · Updated 6 months ago
- [EMNLP 2023] Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration ☆36 · Updated last year
- Code for the EMNLP 2024 paper "Neuron-Level Knowledge Attribution in Large Language Models" ☆29 · Updated 4 months ago
- ☆17 · Updated last year
- ☆47 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆54 · Updated 11 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆35 · Updated 7 months ago
- [NeurIPS 2024] Can Language Models Learn to Skip Steps? ☆15 · Updated 2 months ago
- ☆69 · Updated last year
- ☆21 · Updated last year
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- [NeurIPS 2024] Uncertainty of Thoughts: Uncertainty-Aware Planning Enhances Information Seeking in Large Language Models ☆91 · Updated 8 months ago
- Repository for "Label Words are Anchors: An Information Flow Perspective for Understanding In-Context Learning" ☆162 · Updated last year