thomfoster / minRLHF
A (somewhat) minimal library for finetuning language models with PPO on human feedback.
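As context for what a library like this trains against, here is a minimal sketch of the clipped PPO policy loss commonly used in RLHF finetuning. The function name and toy inputs are illustrative assumptions, not code from minRLHF:

```python
import math

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped PPO policy loss (to be minimized), averaged over tokens."""
    total = 0.0
    for lp_new, lp_old, adv in zip(logp_new, logp_old, advantages):
        ratio = math.exp(lp_new - lp_old)          # pi_new(a|s) / pi_old(a|s)
        clipped = max(1.0 - clip_eps, min(1.0 + clip_eps, ratio))
        total += min(ratio * adv, clipped * adv)   # pessimistic (clipped) surrogate
    return -total / len(advantages)                # negate to maximize the objective

# With identical policies and zero advantage the loss is exactly zero;
# large policy ratios are clipped to 1 +/- clip_eps before multiplying the advantage.
```

In a full RLHF loop this loss would be combined with a value-function loss and a KL penalty against the reference model; the sketch above covers only the clipped policy term.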
☆98 · Updated 2 years ago
Alternatives and similar repositories for minRLHF
Users interested in minRLHF are comparing it to the libraries listed below.
- ☆98 · Updated 2 years ago
- RLHF implementation details of OAI's 2019 codebase · ☆194 · Updated last year
- ☆154 · Updated 11 months ago
- Code accompanying the paper Pretraining Language Models with Human Preferences · ☆180 · Updated last year
- Simple next-token-prediction for RLHF · ☆226 · Updated 2 years ago
- ☆159 · Updated 2 years ago
- Implementation of Reinforcement Learning from Human Feedback (RLHF) · ☆173 · Updated 2 years ago
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" · ☆210 · Updated 2 years ago
- A repository for transformer critique learning and generation · ☆89 · Updated last year
- Self-Alignment with Principle-Following Reward Models · ☆169 · Updated last month
- Pre-training code for the Amber 7B LLM · ☆169 · Updated last year
- Scaling Data-Constrained Language Models · ☆342 · Updated 4 months ago
- DSIR large-scale data selection framework for language model training · ☆265 · Updated last year
- ☆245 · Updated 2 years ago
- Repo for the paper Shepherd: A Critic for Language Model Generation · ☆217 · Updated 2 years ago
- A minimal example of aligning language models with RLHF, similar to ChatGPT · ☆224 · Updated 2 years ago
- ☆100 · Updated last year
- ☆129 · Updated last year
- RL algorithm: Advantage-Induced Policy Alignment · ☆65 · Updated 2 years ago
- Plug-and-play implementation of "Textbooks Are All You Need", ready for training, inference, and dataset generation · ☆73 · Updated 2 years ago
- Code for the ACL 2024 paper Adversarial Preference Optimization (APO) · ☆57 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers" (NeurIPS 2023) · ☆137 · Updated last year
- An open-source implementation of Scaling Laws for Neural Language Models using nanoGPT · ☆49 · Updated last year
- "Improving Mathematical Reasoning with Process Supervision" by OpenAI · ☆111 · Updated 3 weeks ago
- The Mix of Minimal Optimal Sets (MMOS) dataset has two advantages: higher performance and lower construction costs on math… · ☆73 · Updated last year
- Recurrent Memory Transformer · ☆152 · Updated 2 years ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… · ☆75 · Updated last year
- Code and configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models · ☆64 · Updated 6 months ago
- Multipack distributed sampler for fast padding-free training of LLMs · ☆201 · Updated last year
- ☆280 · Updated 10 months ago