☆21 · updated Aug 19, 2024
Alternatives and similar repositories for HalluDial
Users interested in HalluDial are comparing it to the libraries listed below.
- [ACL 2024] ANAH & [NeurIPS 2024] ANAH-v2 & [ICLR 2025] Mask-DPO ☆62 · updated Apr 30, 2025
- [EMNLP 2024] A Multi-level Hallucination Diagnostic Benchmark for Tool-Augmented Large Language Models ☆21 · updated Sep 23, 2024
- ☆49 · updated Jan 7, 2024
- ☆14 · updated Oct 11, 2023
- Source code for "Truth-Aware Context Selection: Mitigating the Hallucinations of Large Language Models Being Misled by Untruthful Contexts" ☆17 · updated Sep 2, 2024
- I-SHEEP: Iterative Self-enHancEmEnt Paradigm of LLMs through Self-Instruct and Self-Assessment ☆17 · updated Jan 16, 2025
- [IJCAI 2024] FactCHD: Benchmarking Fact-Conflicting Hallucination Detection ☆90 · updated Apr 28, 2024
- Safety-J: Evaluating Safety with Critique ☆16 · updated Jul 28, 2024
- ☆22 · updated Feb 3, 2024
- Flames: a highly adversarial Chinese benchmark for evaluating LLM harmlessness, developed by Shanghai AI Lab and the Fudan NLP Group ☆63 · updated May 21, 2024
- ☆13 · updated Aug 26, 2024
- Code for "Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models" ☆12 · updated Feb 10, 2025
- [NAACL 2025 Main, Selected Oral] Repository for the paper "Prompt Compression for Large Language Models: A Survey" ☆36 · updated May 18, 2025
- Competition of Mechanisms: Tracing How Language Models Handle Facts and Counterfactuals ☆12 · updated May 24, 2024
- ☆27 · updated Jun 5, 2023
- ☆15 · updated Apr 22, 2024
- A Survey of Hallucination in Large Foundation Models ☆56 · updated Jan 10, 2024
- Repository for the paper "Cognitive Mirage: A Review of Hallucinations in Large Language Models" ☆49 · updated Oct 21, 2023
- A full-fledged Mistral + wandb setup ☆13 · updated Aug 16, 2024
- Implementation of "RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content" ☆23 · updated Jul 28, 2024
- Dataset and evaluation script for "Evaluating Hallucinations in Chinese Large Language Models" ☆138 · updated Jun 5, 2024
- A simple PyTorch implementation of a CLIP-based baseline for image-text matching ☆19 · updated May 25, 2023
- ☆15 · updated May 12, 2025
- ☆22 · updated Jan 5, 2024
- LLM evaluation ☆16 · updated Nov 7, 2023
- ☆14 · updated Oct 28, 2023
- This project is mainly used for TCP intranet penetration (this is the client) ☆16 · updated Oct 23, 2019
- Token-level Reference-free Hallucination Detection ☆97 · updated Jul 25, 2023
- ICLR 2025 ☆31 · updated May 21, 2025
- [AAAI 2025] Adapting to Non-Stationary Environments: Multi-Armed Bandit Enhanced Retrieval-Augmented Generation on Knowledge Graphs ☆18 · updated Nov 9, 2024
- ☆68 · updated Mar 13, 2026
- Benchmarking membership inference attacks (MIAs) against LLMs ☆28 · updated Oct 8, 2024
- ☆15 · updated Aug 1, 2019
- Jaehyung Kim et al.'s ACL 2023 paper "infoVerse: A Universal Framework for Dataset Characterization with Multidimensional Meta-informat…" ☆16 · updated Jun 28, 2023
- Fudan Baize (复旦白泽) large language model safety benchmark suite (Summer 2024 edition) ☆51 · updated Jul 31, 2024
- Code and data for "Are Large Pre-Trained Language Models Leaking Your Personal Information?" (Findings of EMNLP 2022) ☆27 · updated Oct 31, 2022
- Repository for the paper "DiaHalu: A Dialogue-level Hallucination Evaluation Benchmark for Large Language Models" (EMNLP 2024 …) ☆18 · updated Apr 5, 2025
- ☆43 · updated Sep 3, 2024
- [ICML 2025] Predictive Data Selection: The Data That Predicts Is the Data That Teaches ☆63 · updated Mar 4, 2025