lvwerra / rl-implementations
This repo contains a set of notebooks to reproduce reinforcement learning algorithms.
☆15 · Updated 2 years ago
Alternatives and similar repositories for rl-implementations
Users interested in rl-implementations are comparing it to the libraries listed below.
- Official code for the paper "Context-Aware Language Modeling for Goal-Oriented Dialogue Systems" ☆34 · Updated 2 years ago
- Official code from the paper "Offline RL for Natural Language Generation with Implicit Language Q Learning" ☆209 · Updated 2 years ago
- A lightweight PyTorch implementation of the Transformer-XL architecture proposed by Dai et al. (2019) ☆37 · Updated 2 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆188 · Updated 3 years ago
- Train very large language models in Jax. ☆209 · Updated last year
- Code accompanying the paper Pretraining Language Models with Human Preferences ☆180 · Updated last year
- Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale, TACL (2022) ☆130 · Updated 3 months ago
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- Evaluation suite for large-scale language models. ☆128 · Updated 4 years ago
- Official repository for the paper "Can You Learn an Algorithm? Generalizing from Easy to Hard Problems with Recurrent Networks" ☆59 · Updated 3 years ago
- Functional local implementations of main model parallelism approaches ☆96 · Updated 2 years ago
- Python library which enables complex compositions of language models such as scratchpads, chain of thought, tool use, selection-inference… ☆210 · Updated 3 months ago
- Trains Transformer model variants. Data isn't shuffled between batches. ☆143 · Updated 2 years ago
- 🎢 Creating and sharing simulation environments for embodied and synthetic data research ☆191 · Updated last year
- A library to create and manage configuration files, especially for machine learning projects. ☆79 · Updated 3 years ago
- Experiments on GPT-3's ability to fit numerical models in-context. ☆14 · Updated 3 years ago
- Implementation of a Transformer that Ponders, using the scheme from the PonderNet paper ☆81 · Updated 3 years ago
- Official repository for the paper "Going Beyond Linear Transformers with Recurrent Fast Weight Programmers" (NeurIPS 2021) ☆50 · Updated 3 months ago
- ☆35 · Updated 2 years ago
- Inference code for LLaMA models in JAX ☆120 · Updated last year
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- ☆101 · Updated last year
- DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective. ☆168 · Updated 2 months ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- [NeurIPS 2023] Learning Transformer Programs ☆162 · Updated last year
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆137 · Updated last year
- ☆31 · Updated 3 years ago
- ☆67 · Updated 3 years ago
- Supplementary Data for Evolving Reinforcement Learning Algorithms ☆46 · Updated 4 years ago
- One stop shop for all things carp ☆59 · Updated 3 years ago