zjunlp / EasyDetect
[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.
☆35 · Updated 6 months ago
Alternatives and similar repositories for EasyDetect
Users interested in EasyDetect are comparing it to the libraries listed below.
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆61 · Updated last year
- [ACL 2024] SALAD benchmark & MD-Judge ☆158 · Updated 5 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 11 months ago
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO ☆52 · Updated 3 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Models (https://arxiv.org/pdf/2411.02433) ☆28 · Updated 8 months ago
- [ICLR 2025] Released code for the paper "Spurious Forgetting in Continual Learning of Language Models" ☆50 · Updated 3 months ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆85 · Updated 3 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning" ☆128 · Updated 9 months ago
- [ICLR 2024] RAIN: Your Language Models Can Align Themselves without Finetuning ☆97 · Updated last year
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆61 · Updated last year
- LLM Unlearning ☆174 · Updated last year
- [NeurIPS 2024] RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models ☆78 · Updated 10 months ago
- The repository of the paper "REEF: Representation Encoding Fingerprints for Large Language Models", which aims to protect the IP of open-source… ☆60 · Updated 7 months ago
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆53 · Updated 11 months ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter-Efficient Tuning ☆34 · Updated 9 months ago
- Code and data for DPA-RAG, accepted to the WWW 2025 main conference ☆61 · Updated 7 months ago
- A novel MoGU framework that improves LLMs' safety while preserving their usability ☆15 · Updated 7 months ago
- [EMNLP 2024] The official GitHub repo for the survey paper "Knowledge Conflicts for LLMs: A Survey" ☆133 · Updated 11 months ago
- Code and data for the paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆69 · Updated last year
- [ACL 2024] A Knowledge-grounded Interactive Evaluation Framework for Large Language Models ☆37 · Updated last year
- ☆47 · Updated last year
- [TMLR 2025] A Survey on the Honesty of Large Language Models ☆58 · Updated 8 months ago
- A method of ensemble learning for heterogeneous large language models ☆58 · Updated last year
- [ICLR 2025 Oral] RM-Bench: Benchmarking Reward Models of Language Models with Subtlety and Style ☆59 · Updated last month
- [FCS 2024] LVLM safety paper ☆18 · Updated 7 months ago
- [ICLR 2025] DataGen: Unified Synthetic Dataset Generation via Large Language Models ☆62 · Updated 5 months ago
- [ACL 2025] Code for "Fine-Tuning on Diverse Reasoning Chains Drives Within-Inference CoT Refinement in LLMs" ☆31 · Updated 2 months ago
- Reinforcement learning code for the SPA-VL dataset ☆36 · Updated last year
- [EMNLP 2024] To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models ☆44 · Updated 7 months ago
- ☆39 · Updated last year