HumanSignal / RLHF
Collection of links, tutorials, and best practices for collecting data and building an end-to-end RLHF system to fine-tune generative AI models
☆214 · Updated last year
Alternatives and similar repositories for RLHF:
Users interested in RLHF are comparing it to the libraries listed below.
- Generative Representational Instruction Tuning ☆596 · Updated last month
- Toolkit for attaching, training, saving, and loading of new heads for transformer models ☆262 · Updated 2 weeks ago
- ToolQA, a new dataset to evaluate the capabilities of LLMs in answering challenging questions with external tools. It offers two levels … ☆251 · Updated last year
- Awesome synthetic (text) datasets ☆261 · Updated 3 months ago
- Official repository for ORPO ☆437 · Updated 8 months ago
- A curated list of Human Preference Datasets for LLM fine-tuning, RLHF, and eval. ☆344 · Updated last year
- Manage scalable open LLM inference endpoints in Slurm clusters ☆252 · Updated 7 months ago
- A comprehensive guide to LLM evaluation methods designed to assist in identifying the most suitable evaluation techniques for various use… ☆92 · Updated last week
- SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models ☆492 · Updated 7 months ago
- Sample notebooks and prompts for LLM evaluation ☆120 · Updated 2 months ago
- LLM Workshop by Sourab Mangrulkar ☆363 · Updated 8 months ago
- Starter pack for the NeurIPS LLM Efficiency Challenge 2023 ☆124 · Updated last year
- Code and data for "Lost in the Middle: How Language Models Use Long Contexts" ☆333 · Updated last year
- Notes and commented code for RLHF (PPO) ☆69 · Updated 11 months ago
- Open implementations of LLM analyses ☆98 · Updated 4 months ago
- Benchmarking library for RAG ☆166 · Updated last week
- The official evaluation suite and dynamic data release for MixEval ☆231 · Updated 3 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆348 · Updated 5 months ago
- Resources relating to the DLAI event: https://www.youtube.com/watch?v=eTieetk2dSw ☆183 · Updated last year
- This project studies the performance and robustness of language models and task-adaptation methods. ☆144 · Updated 9 months ago
- An extensible benchmark for evaluating large language models on planning ☆323 · Updated 8 months ago
- [Data + code] ExpertQA: Expert-Curated Questions and Attributed Answers ☆125 · Updated 11 months ago
- Truth Forest: Toward Multi-Scale Truthfulness in Large Language Models through Intervention without Tuning ☆45 · Updated last year
- [EMNLP 2023] Enabling Large Language Models to Generate Text with Citations. Paper: https://arxiv.org/abs/2305.14627 ☆475 · Updated 4 months ago
- GitHub repository for "RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models" ☆147 · Updated 2 months ago
- Automatic evals for LLMs ☆266 · Updated this week
- Welcome to the LLMs Interview Prep Guide! This GitHub repository offers a curated set of interview questions and answers tailored for Dat… ☆122 · Updated last year
- BABILong is a benchmark for LLM evaluation using the needle-in-a-haystack approach. ☆185 · Updated 3 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆803 · Updated last week
- Code for "STaR: Bootstrapping Reasoning With Reasoning" (NeurIPS 2022) ☆197 · Updated last year