pytorch-tpu / examples
This repository contains example code for building models on TPUs.
☆30 · Updated 2 years ago
Alternatives and similar repositories for examples
Users interested in examples are comparing it to the libraries listed below.
- A queue service for quickly developing scripts that use all your GPUs efficiently ☆88 · Updated 3 years ago
- LM Pretraining with PyTorch/TPU ☆136 · Updated 5 years ago
- A case study of efficient training of large language models using commodity hardware ☆68 · Updated 3 years ago
- A collection of Models, Datasets, DataModules, Callbacks, Metrics, Losses and Loggers to better integrate pytorch-lightning with transfor… ☆47 · Updated 2 years ago
- GPT, but made only out of MLPs ☆89 · Updated 4 years ago
- Code for scaling Transformers ☆26 · Updated 4 years ago
- Configure Python functions explicitly and safely ☆127 · Updated 10 months ago
- Simple tooling for marking deprecated functions or classes and re-routing callers to their successors ☆50 · Updated 2 months ago
- Helper scripts and notes that were used while porting various NLP models ☆47 · Updated 3 years ago
- A deep learning library based on PyTorch focused on low-resource language research and robustness ☆70 · Updated 3 years ago
- ☆104 · Updated 4 years ago
- TPU support for the fastai library ☆13 · Updated 4 years ago
- Babysit your preemptible TPUs ☆86 · Updated 2 years ago
- Standalone pre-training recipe with JAX+Flax ☆32 · Updated 2 years ago
- ☆64 · Updated 5 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis ☆147 · Updated 4 years ago
- HetSeq: Distributed GPU Training on Heterogeneous Infrastructure ☆106 · Updated 2 years ago
- ☆87 · Updated 3 years ago
- A diff tool for language models ☆44 · Updated last year
- Code repo for the "Transformer on a Diet" paper ☆31 · Updated 5 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- Python Research Framework ☆106 · Updated 2 years ago
- Amos optimizer with the JEstimator lib ☆82 · Updated last year
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆55 · Updated 4 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 4 years ago
- Pylint plugin for PyTorch tensor annotations/operations ☆20 · Updated 6 years ago
- ☆31 · Updated 3 months ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training ☆113 · Updated 2 years ago
- Training Transformer-XL on 128 GPUs ☆140 · Updated 5 years ago
- ☆153 · Updated 5 years ago