opendilab / awesome-RLHF
A curated list of reinforcement learning with human feedback resources (continually updated)
☆3,901 · Updated 2 months ago
Alternatives and similar repositories for awesome-RLHF:
Users interested in awesome-RLHF are comparing it to the libraries listed below.
- Reference implementation for DPO (Direct Preference Optimization) ☆2,542 · Updated 8 months ago
- A modular RL library to fine-tune language models to human preferences ☆2,304 · Updated last year
- A trend started by "Chain of Thought Prompting Elicits Reasoning in Large Language Models". ☆2,033 · Updated last year
- Benchmarking large language models' complex reasoning ability with chain-of-thought prompting ☆2,718 · Updated 8 months ago
- Aligning pretrained language models with instruction data generated by themselves. ☆4,353 · Updated 2 years ago
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆3,939 · Updated last week
- Train transformer language models with reinforcement learning. ☆13,373 · Updated this week
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ☆4,627 · Updated last year
- [ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters ☆5,863 · Updated last year
- An Open-source Toolkit for LLM Development ☆2,774 · Updated 3 months ago
- Beyond the Imitation Game collaborative benchmark for measuring and extrapolating the capabilities of language models ☆3,034 · Updated 9 months ago
- Secrets of RLHF in Large Language Models Part I: PPO ☆1,360 · Updated last year
- An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast. ☆1,724 · Updated 4 months ago
- Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback ☆1,449 · Updated 10 months ago
- An open-source framework for training large multimodal models. ☆3,898 · Updated 7 months ago
- Reasoning in LLMs: Papers and Resources, including Chain-of-Thought, OpenAI o1, and DeepSeek-R1 🍓 ☆3,000 · Updated last month
- Robust recipes to align language models with human and AI preferences ☆5,145 · Updated 5 months ago
- Must-read Papers on LLM Agents. ☆2,325 · Updated 2 months ago
- 800,000 step-level correctness labels on LLM solutions to MATH problems ☆1,985 · Updated last year
- ☆2,778 · Updated 2 months ago
- Code for the paper Fine-Tuning Language Models from Human Preferences ☆1,324 · Updated last year
- A collection of open-source datasets to train instruction-following LLMs (ChatGPT, LLaMA, Alpaca) ☆1,114 · Updated last year
- Open Academic Research on Improving LLaMA to SOTA LLM ☆1,618 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,514 · Updated last year
- An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO & GRPO & REINFORCE++ & LoRA & vLLM & RFT) ☆6,457 · Updated this week
- Human preference data for "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" ☆1,728 · Updated last year
- Code and documents of LongLoRA and LongAlpaca (ICLR 2024 Oral) ☆2,652 · Updated 8 months ago
- The official GitHub page for the survey paper "A Survey on Evaluation of Large Language Models". ☆1,518 · Updated 3 weeks ago
- [ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs. ☆2,222 · Updated this week
- Code for our EMNLP 2023 Paper: "LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models" ☆1,154 · Updated last year