yinzhangyue / SelfAware
Do Large Language Models Know What They Don’t Know?
☆97 · Updated 7 months ago
Alternatives and similar repositories for SelfAware
Users who are interested in SelfAware are comparing it to the repositories listed below:
- Code & Data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆65 · Updated last year
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆112 · Updated 9 months ago
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆78 · Updated 5 months ago
- ☆43 · Updated last year
- Feeling confused about superalignment? Here is a reading list ☆42 · Updated last year
- ☆74 · Updated last year
- ☆62 · Updated 2 years ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆127 · Updated 11 months ago
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆85 · Updated 4 months ago
- Official repository for the ACL 2025 paper "Model Extrapolation Expedites Alignment" ☆73 · Updated last month
- ☆31 · Updated last year
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆69 · Updated last year
- Paper list for "The Life Cycle of Knowledge in Big Language Models: A Survey" ☆59 · Updated last year
- ☆54 · Updated 10 months ago
- [NAACL 2024 Outstanding Paper] Source code for the NAACL 2024 paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆114 · Updated 11 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆81 · Updated last year
- [ACL 2024] FollowBench: A Multi-level Fine-grained Constraints Following Benchmark for Large Language Models ☆101 · Updated last week
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- Implementation of the ICML 2023 paper "Specializing Smaller Language Models towards Multi-Step Reasoning" ☆131 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- ☆138 · Updated last year
- Lightweight tool to identify data contamination in LLM evaluation ☆51 · Updated last year
- ☆40 · Updated last year
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated 10 months ago
- Self-adaptive in-context learning ☆45 · Updated 2 years ago
- Exchange-of-Thought: Enhancing Large Language Model Capabilities through Cross-Model Communication ☆20 · Updated last year
- [ACL 2024 Findings] CriticBench: Benchmarking LLMs for Critique-Correct Reasoning ☆25 · Updated last year
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆125 · Updated 9 months ago
- [AAAI 2025 Oral] Evaluating Mathematical Reasoning Beyond Accuracy ☆61 · Updated 6 months ago