flageval-baai / HalluDial
☆19 · Updated 11 months ago
Alternatives and similar repositories for HalluDial
Users interested in HalluDial are comparing it to the repositories listed below.
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO ☆50 · Updated 3 months ago
- ☆47 · Updated last year
- [EMNLP 2024] The official GitHub repo for the paper "Course-Correction: Safety Alignment Using Synthetic Preferences" ☆19 · Updated 10 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆130 · Updated 10 months ago
- [2025-TMLR] A Survey on the Honesty of Large Language Models ☆58 · Updated 8 months ago
- [ACL 2024] Learning to Edit: Aligning LLMs with Knowledge Editing ☆36 · Updated 11 months ago
- Watch Every Step! LLM Agent Learning via Iterative Step-level Process Refinement (EMNLP 2024 Main Conference) ☆61 · Updated 9 months ago
- [COLING 2025] ToolEyes: Fine-Grained Evaluation for Tool Learning Capabilities of Large Language Models in Real-world Scenarios ☆68 · Updated 2 months ago
- [NeurIPS'24] Weak-to-Strong Search: Align Large Language Models via Searching over Small Language Models ☆62 · Updated 8 months ago
- Official code implementation for the ACL 2025 paper "CoT-based Synthesizer: Enhancing LLM Performance through Answer Synthesis" ☆26 · Updated 2 months ago
- The official repository of "Improving Large Language Models via Fine-grained Reinforcement Learning with Minimum Editing Constraint" ☆38 · Updated last year
- [ACL 2024 main] Aligning Large Language Models with Human Preferences through Representation Engineering (https://aclanthology.org/2024.… ☆26 · Updated 10 months ago
- BeHonest: Benchmarking Honesty in Large Language Models ☆34 · Updated 11 months ago
- Implementation of "ACL'24: When Do LLMs Need Retrieval Augmentation? Mitigating LLMs’ Overconfidence Helps Retrieval Augmentation" ☆25 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆84 · Updated 2 months ago
- ☆48 · Updated 9 months ago
- Constraint Back-translation Improves Complex Instruction Following of Large Language Models ☆14 · Updated 2 months ago
- [ACL 2024] Code for the paper "ALaRM: Align Language Models via Hierarchical Rewards Modeling" ☆25 · Updated last year
- [NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct ☆183 · Updated 6 months ago
- [ICML'2024] Can AI Assistants Know What They Don't Know? ☆82 · Updated last year
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆126 · Updated 9 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- Benchmarking LLMs' Emotional Alignment with Humans ☆107 · Updated 6 months ago
- [NAACL 2025 Main Selected Oral] Repository for the paper "Prompt Compression for Large Language Models: A Survey" ☆27 · Updated 2 months ago
- A method of ensemble learning for heterogeneous large language models ☆58 · Updated last year
- Official implementation for the paper "Integrative Decoding: Improving Factuality via Implicit Self-consistency" ☆28 · Updated 4 months ago
- [ACL 2024] Planning, Creation, Usage: Benchmarking LLMs for Comprehensive Tool Utilization in Real-World Complex Scenarios ☆60 · Updated last week
- Evaluating the Ripple Effects of Knowledge Editing in Language Models ☆56 · Updated last year
- Official code for the paper "SPA-RL: Reinforcing LLM Agent via Stepwise Progress Attribution" ☆39 · Updated 3 weeks ago
- This is the implementation of LeCo ☆31 · Updated 6 months ago