Sea-Snell / Implicit-Language-Q-Learning
Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning"
☆211 · Updated 2 years ago
Alternatives and similar repositories for Implicit-Language-Q-Learning
Users interested in Implicit-Language-Q-Learning are comparing it to the libraries listed below.
- ☆158 · Updated 2 years ago
- Code accompanying the paper "Pretraining Language Models with Human Preferences" ☆180 · Updated last year
- ☆110 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- A repository for transformer critique learning and generation ☆89 · Updated 2 years ago
- Simple next-token prediction for RLHF ☆228 · Updated 2 years ago
- RL algorithm: advantage-induced policy alignment ☆66 · Updated 2 years ago
- ☆220 · Updated 2 years ago
- Code for the paper "VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment" ☆185 · Updated 8 months ago
- ☆216 · Updated 2 years ago
- ☆86 · Updated last year
- Official code for the paper "Context-Aware Language Modeling for Goal-Oriented Dialogue Systems" ☆34 · Updated 3 years ago
- Code for Contrastive Preference Learning (CPL) ☆178 · Updated last year
- RLHF implementation details of OpenAI's 2019 codebase ☆197 · Updated 2 years ago
- Python library that enables complex compositions of language models, such as scratchpads, chain of thought, tool use, selection-inference… ☆216 · Updated 3 weeks ago
- Code for the paper "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding", NAACL'22 ☆66 · Updated 3 years ago
- ☆128 · Updated 2 years ago
- ☆144 · Updated 6 months ago
- ☆185 · Updated 2 years ago
- ☆39 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Updated last year
- SmartPlay is a benchmark for Large Language Models (LLMs) that uses a variety of games to test important LLM capabilities as agents… ☆145 · Updated last year
- ☆119 · Updated last year
- ☆35 · Updated 3 years ago
- A (somewhat) minimal library for finetuning language models with PPO on human feedback ☆90 · Updated 3 years ago
- Train very large language models in Jax ☆210 · Updated 2 years ago
- ☆84 · Updated 2 years ago
- ☆160 · Updated last year
- A reinforcement learning environment for the IGLU 2022 competition at NeurIPS ☆35 · Updated 2 years ago
- ☆98 · Updated 2 years ago