augustwester / transformer-xl
A lightweight PyTorch implementation of the Transformer-XL architecture proposed by Dai et al. (2019)
☆37 · Updated 2 years ago
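For context on what the repository implements: Transformer-XL's key idea is segment-level recurrence, where each layer attends over the current segment concatenated with a cached, gradient-detached memory of the previous segment's hidden states. The sketch below is a minimal single-head illustration of that recurrence under simplifying assumptions (relative positional encoding and causal masking omitted); the class name and tensor shapes are illustrative, not the repository's actual API.

```python
import torch
import torch.nn as nn

class RecurrentSelfAttention(nn.Module):
    """Single-head self-attention over the current segment plus a cached memory
    of previous hidden states (Transformer-XL's segment-level recurrence).
    Relative positional encoding and causal masking are omitted for brevity."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.kv = nn.Linear(d_model, 2 * d_model)
        self.scale = d_model ** -0.5

    def forward(self, x, mem):
        # x:   (batch, seg_len, d_model)  current segment
        # mem: (batch, mem_len, d_model)  cached states from the previous segment
        ctx = torch.cat([mem, x], dim=1)              # keys/values span memory + current segment
        q = self.q(x)
        k, v = self.kv(ctx).chunk(2, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = attn @ v
        # New memory: the most recent mem_len hidden states, detached so no
        # gradients flow back through previous segments.
        new_mem = ctx[:, -mem.size(1):].detach() if mem.size(1) > 0 else x.detach()
        return out, new_mem

# Usage: carry the memory from one segment to the next.
layer = RecurrentSelfAttention(16)
x = torch.randn(2, 8, 16)        # batch of two 8-token segments
mem = torch.zeros(2, 8, 16)      # empty memory before the first segment
out, mem = layer(x, mem)         # out: (2, 8, 16); mem feeds the next segment
```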
Alternatives and similar repositories for transformer-xl:
Users interested in transformer-xl are comparing it to the repositories listed below.
- Yet another random morning idea to be quickly tried and architecture shared if it works; to allow the transformer to pause for any amount… ☆53 · Updated last year
- ☆53 · Updated last year
- Implementation of Direct Preference Optimization ☆15 · Updated last year
- ☆43 · Updated last year
- ☆61 · Updated last year
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆37 · Updated last year
- ☆78 · Updated 9 months ago
- Minimal but scalable implementation of large language models in JAX ☆34 · Updated 5 months ago
- Implementation of GateLoop Transformer in Pytorch and Jax ☆87 · Updated 10 months ago
- A MAD laboratory to improve AI architecture designs 🧪 ☆111 · Updated 4 months ago
- A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks ☆32 · Updated 5 months ago
- some common Huggingface transformers in maximal update parametrization (µP) ☆80 · Updated 3 years ago
- [ICML 2024] Official code release accompanying the paper "diff History for Neural Language Agents" (Piterbarg, Pinto, Fergus) ☆20 · Updated 8 months ago
- Official code for the paper "Context-Aware Language Modeling for Goal-Oriented Dialogue Systems" ☆34 · Updated 2 years ago
- Large scale 4D parallelism pre-training for 🤗 transformers in Mixture of Experts *(still work in progress)* ☆82 · Updated last year
- ☆92 · Updated last year
- Code and Configs for Asynchronous RLHF: Faster and More Efficient RL for Language Models ☆45 · Updated last month
- Official Implementation of NeurIPS'23 Paper "Cross-Episodic Curriculum for Transformer Agents" ☆31 · Updated last year
- HomebrewNLP in JAX flavour for maintainable TPU training ☆49 · Updated last year
- Train very large language models in Jax. ☆204 · Updated last year
- Intelligent Go-Explore: Standing on the Shoulders of Giant Foundation Models ☆54 · Updated 2 months ago
- JAX notebook showing how to LoRA + GPTQ arbitrary models ☆9 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated 10 months ago
- ☆27 · Updated 9 months ago
- ☆60 · Updated 3 years ago
- ☆49 · Updated last year
- ☆34 · Updated 2 years ago
- ☆33 · Updated 7 months ago
- Minimal (400 LOC) implementation of maximal (multi-node, FSDP) GPT training ☆123 · Updated last year
- Exploring finetuning public checkpoints on filtered 8K sequences from the Pile ☆115 · Updated 2 years ago