yinzhangyue / SelfAware
Do Large Language Models Know What They Don’t Know?
☆94 · Updated 5 months ago
Alternatives and similar repositories for SelfAware:
Users interested in SelfAware are comparing it to the repositories listed below
- [ICLR'24 Spotlight] "Adaptive Chameleon or Stubborn Sloth: Revealing the Behavior of Large Language Models in Knowledge Conflicts" ☆67 · Updated last year
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆55 · Updated last year
- Benchmarking Complex Instruction-Following with Multiple Constraints Composition (NeurIPS 2024 Datasets and Benchmarks Track) ☆81 · Updated 2 months ago
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆124 · Updated 9 months ago
- Code & data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆63 · Updated last year
- [NAACL 2024 Outstanding Paper] Source code for the paper "R-Tuning: Instructing Large Language Models to Say 'I Don't… ☆110 · Updated 9 months ago
- Official repository for the paper "Weak-to-Strong Extrapolation Expedites Alignment" ☆74 · Updated 10 months ago
- ☆73 · Updated 11 months ago
- Collection of papers on scalable automated alignment ☆89 · Updated 6 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆114 · Updated 7 months ago
- Code and data for the paper "Can Large Language Models Understand Real-World Complex Instructions?" (AAAI 2024) ☆48 · Updated last year
- ☆61 · Updated 2 years ago
- ☆31 · Updated last year
- ☆41 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆76 · Updated 3 months ago
- ☆53 · Updated 8 months ago
- [EMNLP 2023] MQuAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions ☆109 · Updated 7 months ago
- Towards Systematic Measurement for Long Text Quality ☆34 · Updated 8 months ago
- Self-adaptive in-context learning ☆44 · Updated 2 years ago
- GitHub repository for "FELM: Benchmarking Factuality Evaluation of Large Language Models" (NeurIPS 2023) ☆59 · Updated last year
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆81 · Updated last year
- ☆86 · Updated last year
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation ☆139 · Updated 10 months ago
- Lightweight tool to identify data contamination in LLM evaluation ☆50 · Updated last year
- [ICLR 2024] The open-source repo of THU-KEG's KoLA benchmark ☆50 · Updated last year
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- Logiqa2.0 dataset - logical reasoning in MRC and NLI tasks ☆90 · Updated last year
- ☆39 · Updated last year
- Implementation of the paper "Making Retrieval-Augmented Language Models Robust to Irrelevant Context" ☆70 · Updated 9 months ago
- Search, Verify and Feedback: Towards Next Generation Post-training Paradigm of Foundation Models via Verifier Engineering ☆57 · Updated 5 months ago