patronus-ai / Lynx-hallucination-detection
⭐41 · Updated last year
Alternatives and similar repositories for Lynx-hallucination-detection
Users interested in Lynx-hallucination-detection are comparing it to the repositories listed below.
- Codebase accompanying the Summary of a Haystack paper. ⭐79 · Updated 10 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. ⭐77 · Updated 9 months ago
- Mixing Language Models with Self-Verification and Meta-Verification ⭐105 · Updated 7 months ago
- ⭐125 · Updated 10 months ago
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" ⭐114 · Updated 10 months ago
- Compare how agent systems perform on several benchmarks. ⭐99 · Updated 9 months ago
- Official repo for the paper PHUDGE: Phi-3 as Scalable Judge. Evaluate your LLMs with or without custom rubric, reference answer, absolute… ⭐49 · Updated last year
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents using an Elo ranker ⭐114 · Updated 3 weeks ago
- Small and Efficient Mathematical Reasoning LLMs ⭐71 · Updated last year
- ⭐57 · Updated 10 months ago
- Evaluating LLMs with fewer examples ⭐160 · Updated last year
- Retrieval Augmented Generation Generalized Evaluation Dataset ⭐54 · Updated 2 weeks ago
- Code for the NeurIPS LLM Efficiency Challenge ⭐59 · Updated last year
- ⭐29 · Updated this week
- The official evaluation suite and dynamic data release for MixEval. ⭐242 · Updated 8 months ago
- This project studies the performance and robustness of language models and task-adaptation methods. ⭐150 · Updated last year
- Source code of the paper: RetrievalQA: Assessing Adaptive Retrieval-Augmented Generation for Short-form Open-Domain Question Answering [F… ⭐66 · Updated last year
- [NeurIPS 2023] Code for the paper `Large Language Model as Attributed Training Data Generator: A Tale of Diversity and Bias`. ⭐152 · Updated last year
- ⭐23 · Updated 2 years ago
- Verifiers for LLM Reinforcement Learning ⭐68 · Updated 3 months ago
- ⭐118 · Updated 11 months ago
- Open Implementations of LLM Analyses ⭐105 · Updated 9 months ago
- Large language model evaluation framework with an Elo leaderboard and A/B testing ⭐52 · Updated 9 months ago
- ⭐53 · Updated 8 months ago
- Let's build better datasets, together! ⭐260 · Updated 7 months ago
- ⭐145 · Updated last year
- A framework for few-shot evaluation of language models. ⭐34 · Updated 4 months ago
- Functional Benchmarks and the Reasoning Gap ⭐88 · Updated 10 months ago
- Manage scalable open LLM inference endpoints in Slurm clusters ⭐268 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators ⭐42 · Updated last year