glgh / awesome-llm-human-preference-datasets
A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval.
☆367 · Updated last year
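As a quick illustration of the data format these repositories deal in, here is a minimal sketch of loading and inspecting a pairwise human preference dataset with the Hugging Face `datasets` library. The choice of Anthropic/hh-rlhf is an assumption for illustration, not a dataset singled out by this list:

```python
# Minimal sketch: inspect a pairwise human preference dataset.
# Assumes the Hugging Face `datasets` library is installed; Anthropic/hh-rlhf
# is used as an illustrative example, not necessarily drawn from this list.
from datasets import load_dataset

ds = load_dataset("Anthropic/hh-rlhf", split="train")

example = ds[0]
# Each record pairs a preferred ("chosen") and a dispreferred ("rejected")
# completion for the same prompt -- the supervision signal that reward
# models for RLHF are trained on.
print(example["chosen"][:300])
print(example["rejected"][:300])
```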
Alternatives and similar repositories for awesome-llm-human-preference-datasets
Users interested in awesome-llm-human-preference-datasets are comparing it to the repositories listed below.
- RewardBench: the first evaluation tool for reward models. ☆604 · Updated last week
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts". ☆347 · Updated last year
- ☆276 · Updated 5 months ago
- A large-scale, fine-grained, diverse preference dataset (and models). ☆341 · Updated last year
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆268 · Updated last year
- All available datasets for Instruction Tuning of Large Language Models. ☆252 · Updated last year
- Generative Judge for Evaluating Alignment. ☆239 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆857 · Updated 2 weeks ago
- This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca and Flan-T5 on held-out tasks. ☆546 · Updated last year
- Inference-Time Intervention: Eliciting Truthful Answers from a Language Model. ☆530 · Updated 4 months ago
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆489 · Updated 8 months ago
- ☆237 · Updated 2 years ago
- [ACL'24 Outstanding] Data and code for L-Eval, a comprehensive long-context language model evaluation benchmark. ☆379 · Updated 11 months ago
- Deita: Data-Efficient Instruction Tuning for Alignment [ICLR 2024]. ☆556 · Updated 6 months ago
- This is the repository of HaluEval, a large-scale hallucination evaluation benchmark for Large Language Models. ☆479 · Updated last year
- Data and Code for Program of Thoughts (TMLR 2023). ☆276 · Updated last year
- Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models". ☆498 · Updated 5 months ago
- Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them. ☆497 · Updated 11 months ago
- Code for STaR: Bootstrapping Reasoning With Reasoning (NeurIPS 2022). ☆206 · Updated 2 years ago
- A collection of research papers on Self-Correcting Large Language Models with Automated Feedback. ☆531 · Updated 7 months ago
- A package to evaluate the factuality of long-form generation. Original implementation of our EMNLP 2023 paper "FActScore: Fine-grained Atomic… ☆353 · Updated 2 months ago
- A curated list of awesome instruction tuning datasets, models, papers, and repositories. ☆335 · Updated 2 years ago
- LLaMA-TRL: Fine-tuning LLaMA with PPO and LoRA. ☆217 · Updated 2 years ago
- [ICML 2024] LESS: Selecting Influential Data for Targeted Instruction Tuning. ☆456 · Updated 8 months ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation. ☆219 · Updated last year
- ☆283 · Updated last year
- A simulation framework for RLHF and alternatives. Develop your RLHF method without collecting human data. ☆813 · Updated 11 months ago
- 🐋 An unofficial implementation of Self-Alignment with Instruction Backtranslation. ☆140 · Updated last month
- This repository provides an original implementation of Detecting Pretraining Data from Large Language Models by *Weijia Shi, *Anirudh Aji… ☆224 · Updated last year
- [EMNLP 2023] The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning. ☆243 · Updated last year