thomfoster / minRLHF
A (somewhat) minimal library for finetuning language models with PPO on human feedback.
☆86 · Updated 2 years ago
Alternatives and similar repositories for minRLHF
Users interested in minRLHF are comparing it to the libraries listed below.
- ☆98 · Updated 2 years ago
- RLHF implementation details of OAI's 2019 codebase ☆190 · Updated last year
- ☆150 · Updated 10 months ago
- Code accompanying the paper Pretraining Language Models with Human Preferences ☆180 · Updated last year
- Implementation of Reinforcement Learning from Human Feedback (RLHF) ☆173 · Updated 2 years ago
- Simple next-token-prediction for RLHF ☆227 · Updated 2 years ago
- A repository for transformer critique learning and generation ☆90 · Updated last year
- ☆159 · Updated 2 years ago
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" ☆209 · Updated 2 years ago
- This is the repo for the paper Shepherd -- A Critic for Language Model Generation ☆218 · Updated 2 years ago
- RL algorithm: Advantage induced policy alignment ☆65 · Updated 2 years ago
- DSIR large-scale data selection framework for language model training ☆259 · Updated last year
- Self-Alignment with Principle-Following Reward Models ☆166 · Updated 2 weeks ago
- Recurrent Memory Transformer ☆150 · Updated 2 years ago
- ☆127 · Updated last year
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆63 · Updated 5 months ago
- [ICLR 2024] COLLIE: Systematic Construction of Constrained Text Generation Tasks ☆55 · Updated 2 years ago
- Scaling Data-Constrained Language Models ☆342 · Updated 3 months ago
- ☆105 · Updated 2 months ago
- A minimum example of aligning language models with RLHF similar to ChatGPT ☆221 · Updated 2 years ago
- Awesome Reinforcement Learning from Human Feedback, the secret behind ChatGPT XD ☆23 · Updated 2 years ago
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- Self-playing Adversarial Language Game Enhances LLM Reasoning, NeurIPS 2024 ☆139 · Updated 7 months ago
- Plug in and play implementation of "Textbooks Are All You Need", ready for training, inference, and dataset generation ☆74 · Updated 2 years ago
- Mix of Minimal Optimal Sets (MMOS) of dataset has two advantages for two aspects, higher performance and lower construction costs on math… ☆73 · Updated last year
- ☆280 · Updated 8 months ago
- ☆100 · Updated last year
- Pre-training code for Amber 7B LLM ☆168 · Updated last year
- Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Fl… ☆75 · Updated last year
- Evaluating LLMs with Dynamic Data ☆95 · Updated 2 months ago