flageval-baai / HalluDial
☆20 · Updated last year
Alternatives and similar repositories for HalluDial
Users interested in HalluDial are comparing it to the repositories listed below.
- ☆47 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆88 · Updated 4 months ago
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO ☆53 · Updated 4 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆137 · Updated last year
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆60 · Updated 11 months ago
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.…) ☆26 · Updated 11 months ago
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆47 · Updated last year
- [ICLR 2024] Evaluating Large Language Models at Evaluating Instruction Following ☆131 · Updated last year
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆69 · Updated 4 months ago
- Official code implementation for the ACL 2025 paper "CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis" ☆30 · Updated 4 months ago
- [ICML 2024] Can AI Assistants Know What They Don't Know? ☆83 · Updated last year
- [EMNLP 2024] Source code for the paper "Learning Planning-based Reasoning with Trajectory Collection and Process Rewards Synthesizing" ☆82 · Updated 8 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆59 · Updated 9 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated last year
- Benchmarking LLMs' Emotional Alignment with Humans ☆111 · Updated 7 months ago
- ☆20 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆130 · Updated 10 months ago
- [ACL 2024] Implementation of "When Do LLMs Need Retrieval Augmentation? Mitigating LLMs' Overconfidence Helps Retrieval Augmentation" ☆24 · Updated last year
- ☆43 · Updated 5 months ago
- ☆49 · Updated 10 months ago
- [NeurIPS 2024] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆62 · Updated 9 months ago
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 11 months ago
- Semi-Parametric Editing with a Retrieval-Augmented Counterfactual Model ☆68 · Updated 2 years ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆36 · Updated last year
- Do Large Language Models Know What They Don't Know? ☆99 · Updated 10 months ago
- [ACL 2024] Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios ☆63 · Updated last month
- Code & Data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆70 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆61 · Updated 2 months ago