zjunlp / EasyDetect
[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.
☆34 · Updated 5 months ago
Alternatives and similar repositories for EasyDetect
Users interested in EasyDetect are comparing it to the libraries listed below.
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆59 · Updated last year
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆81 · Updated 2 months ago
- ☆55 · Updated 8 months ago
- [ACL 2024] SALAD benchmark & MD-Judge ☆156 · Updated 4 months ago
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆59 · Updated 9 months ago
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆77 · Updated 9 months ago
- Repository for the paper "REEF: Representation Encoding Fingerprints for Large Language Models," which aims to protect the IP of open-source… ☆58 · Updated 6 months ago
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆34 · Updated 8 months ago
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆95 · Updated last year
- ☆33 · Updated 9 months ago
- Reinforcement learning code for the SPA-VL dataset ☆36 · Updated last year
- [FCS'24] LVLM Safety paper ☆18 · Updated 6 months ago
- In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation (ICML 2024) ☆60 · Updated last year
- LLM Unlearning ☆172 · Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆53 · Updated 10 months ago
- ☆44 · Updated 5 months ago
- [ACL 2024] The official codebase for the paper "Self-Distillation Bridges Distribution Gap in Language Model Fine-tuning". ☆124 · Updated 8 months ago
- ☆38 · Updated last year
- [ICLR 2025] Released code for the paper "Spurious Forgetting in Continual Learning of Language Models" ☆49 · Updated 2 months ago
- Awesome Large Reasoning Model (LRM) Safety. This repository collects security-related research on large reasoning models such as … ☆65 · Updated this week
- ☆95 · Updated 2 months ago
- SLED: Self Logits Evolution Decoding for Improving Factuality in Large Language Model (https://arxiv.org/pdf/2411.02433) ☆27 · Updated 7 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆29 · Updated last year
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆63 · Updated 4 months ago
- ☆63 · Updated 2 weeks ago
- ☆23 · Updated 4 months ago
- Code & data for our paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations" ☆66 · Updated last year
- A method of ensemble learning for heterogeneous large language models. ☆58 · Updated 11 months ago
- Official repository for the ACL 2024 paper "SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding" ☆138 · Updated last year
- ☆28 · Updated last year