zjunlp / EasyDetect
[ACL 2024] An Easy-to-use Hallucination Detection Framework for LLMs.
☆38 Updated 11 months ago
Alternatives and similar repositories for EasyDetect
Users interested in EasyDetect are comparing it to the repositories listed below.
- An Easy-to-use Hallucination Detection Framework for LLMs. ☆63 Updated last year
- [ACL 2024] SALAD benchmark & MD-Judge ☆171 Updated 10 months ago
- R-Judge: Benchmarking Safety Risk Awareness for LLM Agents (EMNLP Findings 2024) ☆99 Updated 3 weeks ago
- Reinforcement learning code for the SPA-VL dataset ☆44 Updated last year
- [ACL 2024] Shifting Attention to Relevance: Towards the Predictive Uncertainty Quantification of Free-Form Large Language Models ☆60 Updated last year
- [ACL 2024] Code and data for "Machine Unlearning of Pre-trained Large Language Models" ☆66 Updated last year
- [ICLR'24] RAIN: Your Language Models Can Align Themselves without Finetuning ☆98 Updated last year
- [ACL'24] A Knowledge-grounded Interactive Evaluation Framework for Large Language Models ☆39 Updated last year
- [ICML 2024 Oral] Official code repository for MLLM-as-a-Judge. ☆89 Updated 11 months ago
- ☆174 Updated 3 months ago
- ☆37 Updated last year
- LLM Unlearning ☆181 Updated 2 years ago
- [ICLR'26, NAACL'25 Demo] Toolkit & benchmark for evaluating the trustworthiness of generative foundation models. ☆125 Updated 5 months ago
- Safe Unlearning: A Surprisingly Effective and Generalizable Solution to Defend Against Jailbreak Attacks ☆32 Updated last year
- [ICLR 2025] Code and data repo for the paper "Latent Space Chain-of-Embedding Enables Output-free LLM Self-Evaluation" ☆93 Updated last year
- RWKU: Benchmarking Real-World Knowledge Unlearning for Large Language Models (NeurIPS 2024) ☆89 Updated last year
- ☆60 Updated last year
- [ICLR 2025] Released code for the paper "Spurious Forgetting in Continual Learning of Language Models" ☆58 Updated 8 months ago
- Can Knowledge Editing Really Correct Hallucinations? (ICLR 2025) ☆27 Updated 5 months ago
- ☆48 Updated 2 years ago
- ☆51 Updated 11 months ago
- [ICML 2024] Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models. ☆84 Updated last year
- A framework for evolving and testing question-answering datasets with various models. ☆21 Updated last year
- Official codebase for "STAIR: Improving Safety Alignment with Introspective Reasoning" ☆88 Updated 11 months ago
- Repository for the paper "REEF: Representation Encoding Fingerprints for Large Language Models," which aims to protect the IP of open-source… ☆74 Updated last year
- Awesome Large Reasoning Model (LRM) Safety. This repository collects safety-related research on large reasoning models such as … ☆81 Updated last week
- Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning ☆36 Updated last year
- ☆48 Updated 11 months ago
- A repository on the LLM safety topic, covering attacks, defenses, and studies related to reasoning and RL ☆59 Updated 5 months ago
- [ECCV 2024] Official PyTorch Implementation of "How Many Unicorns Are in This Image? A Safety Evaluation Benchmark for Vision LLMs" ☆86 Updated 2 years ago