vwxyzjn / lm-human-preference-details
RLHF implementation details of OAI's 2019 codebase
☆187 · Updated last year
Alternatives and similar repositories for lm-human-preference-details
Users interested in lm-human-preference-details are comparing it to the libraries listed below.
- ☆147 · Updated 8 months ago
- A (somewhat) minimal library for finetuning language models with PPO on human feedback. ☆86 · Updated 2 years ago
- Code for the paper "ReMax: A Simple, Efficient and Effective Reinforcement Learning Method for Aligning Large Language Models" ☆189 · Updated last year
- PyTorch implementation of DoReMi, a method for optimizing the data mixture weights in language modeling datasets ☆340 · Updated last year
- A large-scale, fine-grained, diverse preference dataset (and models). ☆345 · Updated last year
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆167 · Updated 2 months ago
- (ICML 2024) AlphaZero-like tree search can guide large language model decoding and training ☆278 · Updated last year
- ☆278 · Updated 7 months ago
- Source code for Self-Evaluation Guided MCTS for online DPO. ☆319 · Updated last year
- ☆337 · Updated 2 months ago
- Code and data for "Scaling Relationship on Learning Mathematical Reasoning with Large Language Models" ☆268 · Updated 10 months ago
- Simple next-token-prediction for RLHF ☆227 · Updated last year
- Research code for "ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL" ☆185 · Updated 3 months ago
- ☆96 · Updated 2 years ago
- Code for the ACL 2024 paper "Adversarial Preference Optimization (APO)" ☆56 · Updated last year
- DSIR: a large-scale data selection framework for language model training ☆258 · Updated last year
- RewardBench: the first evaluation tool for reward models. ☆622 · Updated last month
- Reference implementation for Token-level Direct Preference Optimization (TDPO) ☆146 · Updated 5 months ago
- LLaMA-TRL: Fine-tuning LLaMA with PPO and LoRA ☆220 · Updated 2 years ago
- ☆159 · Updated 2 years ago
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" ☆208 · Updated 2 years ago
- ☆240 · Updated 2 years ago
- Scaling Data-Constrained Language Models ☆338 · Updated last month
- Direct Preference Optimization from scratch in PyTorch ☆103 · Updated 4 months ago
- Reproducible, flexible LLM evaluations ☆227 · Updated 3 weeks ago
- Explore what LLMs are really learning during SFT ☆28 · Updated last year
- A simple toolkit for benchmarking LLMs on mathematical reasoning tasks. 🧮✨ ☆239 · Updated last year
- 🌾 OAT: A research-friendly framework for LLM online alignment, including reinforcement learning, preference learning, etc. ☆425 · Updated last week
- ☆99 · Updated last year
- Rectified Rotary Position Embeddings ☆378 · Updated last year