google-deepmind / long-form-factuality
Benchmarking long-form factuality in large language models. Original code for our paper "Long-form factuality in large language models".
☆622 · Updated this week
Alternatives and similar repositories for long-form-factuality
Users interested in long-form-factuality are comparing it to the libraries listed below.
- Official repository for ORPO ☆458 · Updated last year
- ☆524 · Updated 7 months ago
- LLMs can generate feedback on their work, use it to improve the output, and repeat this process iteratively. ☆712 · Updated 9 months ago
- Generative Representational Instruction Tuning ☆660 · Updated 3 weeks ago
- Code for Quiet-STaR ☆735 · Updated 10 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆357 · Updated 10 months ago
- RewardBench: the first evaluation tool for reward models. ☆612 · Updated last month
- Automatic evals for LLMs ☆467 · Updated 3 weeks ago
- ☆1,027 · Updated 7 months ago
- Code and data for "Lumos: Learning Agents with Unified Data, Modular Design, and Open-Source LLMs" ☆466 · Updated last year
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆545 · Updated last year
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them ☆502 · Updated last year
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆730 · Updated 4 months ago
- Data and code for FreshLLMs (https://arxiv.org/abs/2310.03214) ☆364 · Updated last week
- Evaluation suite for LLMs ☆353 · Updated last week
- [ICLR 2024] Lemur: Open Foundation Models for Language Agents ☆551 · Updated last year
- The official evaluation suite and dynamic data release for MixEval. ☆242 · Updated 8 months ago
- [NeurIPS 2024 Spotlight] Buffer of Thoughts: Thought-Augmented Reasoning with Large Language Models ☆646 · Updated 3 weeks ago
- Data and tools for generating and inspecting OLMo pre-training data. ☆1,267 · Updated this week
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆800 · Updated last month
- Implementation of paper Data Engineering for Scaling Language Models to 128K Context ☆464 · Updated last year
- Code and Data for Tau-Bench ☆666 · Updated this week
- Banishing LLM Hallucinations Requires Rethinking Generalization ☆276 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models" ☆503 · Updated 6 months ago
- [ICML 2024] Official repository for "Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models" ☆764 · Updated 11 months ago
- [ICLR 2024 & NeurIPS 2023 WS] An Evaluator LM that is open-source, offers reproducible evaluation, and inexpensive to use. Specifically d… ☆300 · Updated last year
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 · Updated last year
- Evaluate your LLM's response with Prometheus and GPT4 💯 ☆963 · Updated 2 months ago
- xLAM: A Family of Large Action Models to Empower AI Agent Systems ☆486 · Updated last month
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆869 · Updated this week