patronus-ai / Lynx-hallucination-detection
☆43 · Updated last year
Alternatives and similar repositories for Lynx-hallucination-detection
Users interested in Lynx-hallucination-detection are comparing it to the libraries listed below.
- Codebase accompanying the Summary of a Haystack paper. · ☆80 · Updated last year
- ☆130 · Updated last year
- Code repo for "Agent Instructs Large Language Models to be General Zero-Shot Reasoners" · ☆120 · Updated 3 months ago
- Mixing Language Models with Self-Verification and Meta-Verification · ☆112 · Updated last year
- WorkBench: a Benchmark Dataset for Agents in a Realistic Workplace Setting. · ☆61 · Updated last month
- Small and Efficient Mathematical Reasoning LLMs · ☆73 · Updated 2 years ago
- Official repo for the paper "PHUDGE: Phi-3 as Scalable Judge". Evaluate your LLMs with or without a custom rubric, reference answer, absolute… · ☆51 · Updated last year
- MiniCheck: Efficient Fact-Checking of LLMs on Grounding Documents [EMNLP 2024] · ☆194 · Updated 5 months ago
- Lightweight demos for finetuning LLMs. Powered by 🤗 transformers and open-source datasets. · ☆77 · Updated last year
- Verifiers for LLM Reinforcement Learning · ☆80 · Updated 9 months ago
- ☆82 · Updated 2 months ago
- Source code for the collaborative reasoner research project at Meta FAIR. · ☆112 · Updated 9 months ago
- Source code for the paper "RetrievalQA: Assessing Adaptive Retrieval-Augmented Generation for Short-form Open-Domain Question Answering" [F… · ☆67 · Updated last year
- Public code repo for the paper "SaySelf: Teaching LLMs to Express Confidence with Self-Reflective Rationales" · ☆112 · Updated last year
- ☆161 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters · ☆279 · Updated last year
- ☆147 · Updated last year
- 🔧 Compare how agent systems perform on several benchmarks. 📊🚀 · ☆103 · Updated 6 months ago
- Using open-source LLMs to build synthetic datasets for direct preference optimization · ☆72 · Updated last year
- Reward Model framework for LLM RLHF · ☆62 · Updated 2 years ago
- Evaluating LLMs with fewer examples · ☆169 · Updated last year
- Code for the paper "ROUTERBENCH: A Benchmark for Multi-LLM Routing System" · ☆152 · Updated last year
- Scalable Meta-Evaluation of LLMs as Evaluators · ☆43 · Updated last year
- RAGElo is a set of tools that helps you select the best RAG-based LLM agents by using an Elo ranker · ☆126 · Updated 3 months ago
- Functional Benchmarks and the Reasoning Gap · ☆89 · Updated last year
- A framework for standardizing evaluations of large foundation models, beyond single-score reporting and rankings. · ☆174 · Updated last week
- Evaluating LLMs with CommonGen-Lite · ☆93 · Updated last year
- The official repository for "Bring Your Own Data! Self-Supervised Evaluation for Large Language Models" · ☆107 · Updated 2 years ago
- ☆120 · Updated last year
- Official repository for "Scaling Retrieval-Based Language Models with a Trillion-Token Datastore". · ☆223 · Updated last month