augustwester / transformer-xl
A lightweight PyTorch implementation of the Transformer-XL architecture proposed by Dai et al. (2019)
☆37 · Updated 2 years ago
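The core idea behind Transformer-XL (Dai et al., 2019) is segment-level recurrence: hidden states from the previous segment are cached and attended to, with gradients stopped at the cache. A minimal NumPy sketch of the memory-update step is shown below; this is an illustration of the technique, not this repository's API, and the function name is hypothetical:

```python
import numpy as np

def update_memory(mem, hidden, mem_len):
    """Transformer-XL-style memory update: append the current segment's
    hidden states to the cache and keep only the last `mem_len` states.
    (In PyTorch the cache would also be detached to stop gradients;
    a plain array copy stands in for that here.)"""
    cat = np.concatenate([mem, hidden], axis=0)  # (mem_len + seq_len, d_model)
    return cat[-mem_len:]                        # (mem_len, d_model)

# Toy example: seq_len = 4, d_model = 2, memory length 4.
mem = np.zeros((4, 2))                 # empty memory before segment 1
h1 = np.ones((4, 2))                   # hidden states of segment 1
mem = update_memory(mem, h1, mem_len=4)
h2 = 2 * np.ones((4, 2))               # hidden states of segment 2
mem = update_memory(mem, h2, mem_len=4)
```

During attention for the next segment, queries come from the current segment while keys and values are computed over the concatenation `[mem; hidden]`, which is what extends the effective context beyond a single segment.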
Alternatives and similar repositories for transformer-xl:
Users interested in transformer-xl are comparing it to the repositories listed below.
- ☆53 · Updated last year
- ☆79 · Updated 11 months ago
- Minimal but scalable implementation of large language models in JAX ☆34 · Updated 4 months ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆49 · Updated last year
- Yet another random morning idea to be quickly tried, with the architecture shared if it works: allowing the transformer to pause for any amount… ☆53 · Updated last year
- Repository for the code of the "PPL-MCTS: Constrained Textual Generation Through Discriminator-Guided Decoding" paper, NAACL'22 ☆64 · Updated 2 years ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- JAX/Flax implementation of the Hyena Hierarchy ☆34 · Updated last year
- some common Huggingface transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- Code for the paper "The Impact of Positional Encoding on Length Generalization in Transformers", NeurIPS 2023 ☆131 · Updated 11 months ago
- Implementation of GateLoop Transformer in PyTorch and JAX ☆87 · Updated 9 months ago
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated 9 months ago
- Implementation of Gradient Agreement Filtering, from Chaubard et al. of Stanford, but for single-machine microbatches, in PyTorch ☆23 · Updated 2 months ago
- Implementation of Token Shift GPT - An autoregressive model that solely relies on shifting the sequence space for mixing ☆48 · Updated 3 years ago
- ☆33 · Updated 6 months ago
- Learn online intrinsic rewards from LLM feedback ☆35 · Updated 3 months ago
- Train very large language models in Jax. ☆203 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 11 months ago
- ☆76 · Updated 8 months ago
- ☆60 · Updated 3 years ago
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, finetuning, evaluating, and serving LLMs in JAX/Fl… ☆72 · Updated 7 months ago
- Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models ☆54 · Updated last month
- LayerNorm(SmallInit(Embedding)) in a Transformer to improve convergence ☆60 · Updated 3 years ago
- QAmeleon introduces synthetic multilingual QA data using PaLM, a 540B large language model. This dataset was generated by prompt tuning P… ☆34 · Updated last year
- Transformer with Mu-Parameterization, implemented in Jax/Flax. Supports FSDP on TPU pods. ☆30 · Updated 3 months ago
- ☆49 · Updated last year
- Exploring an idea where one forgets about efficiency and carries out attention across each edge of the nodes (tokens) ☆47 · Updated this week
- Code for the paper "Function-Space Learning Rates" ☆17 · Updated last month
- ☆34 · Updated 2 years ago
- TPU pod commander is a package for managing and launching jobs on Google Cloud TPU pods. ☆20 · Updated 9 months ago