shawwn / tpunicorn
Babysit your preemptible TPUs
☆86 · Updated 2 years ago
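For context, tpunicorn installs a small command-line tool, `pu`, that watches preemptible TPUs and recreates them when they get preempted. A minimal usage sketch, assuming the `pu` commands described in the project's README (the TPU name and zone below are placeholders, and exact flags may vary by version):

```python
# Minimal sketch of driving tpunicorn's `pu` CLI from Python.
# Assumes `pip install tpunicorn`; command names are taken from the
# project's README, and the TPU name/zone are placeholders.
import subprocess

# List the TPUs visible to the active gcloud project.
subprocess.run(["pu", "list"], check=True)

# Babysit a TPU: poll it and recreate it whenever it is preempted.
subprocess.run(["pu", "babysit", "my-tpu", "--zone", "europe-west4-a"])
```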
Alternatives and similar repositories for tpunicorn:
Users interested in tpunicorn are comparing it to the libraries listed below.
- HomebrewNLP in JAX flavour for maintainable TPU training ☆48 · Updated last year
- ☆39 · Updated 2 years ago
- Python Research Framework ☆106 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPUv3-8 with GCP ☆58 · Updated 2 years ago
- ☆58 · Updated 3 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- ☆67 · Updated 2 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 2 years ago
- HetSeq: Distributed GPU Training on Heterogeneous Infrastructure ☆106 · Updated last year
- One-stop shop for all things carp ☆59 · Updated 2 years ago
- Training a model similar to OpenAI DALL-E with volunteers from all over the Internet using hivemind and dalle-pytorch (NeurIPS 2021 demo) ☆26 · Updated last year
- Swarm training framework using Haiku + JAX + Ray for layer parallel transformer language models on unreliable, heterogeneous nodes ☆236 · Updated last year
- ☆67 · Updated 2 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆146 · Updated 3 years ago
- Your fruity companion for transformers ☆14 · Updated 2 years ago
- ☆128 · Updated 2 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/…) ☆26 · Updated 10 months ago
- Experiments with generating open-source language model assistants ☆97 · Updated last year
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆93 · Updated 2 years ago
- A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training ☆26 · Updated 2 years ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆116 · Updated 3 years ago
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ ☆37 · Updated 4 years ago
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆35 · Updated 3 years ago
- Pipeline for pulling and processing online language model pretraining data from the web ☆175 · Updated last year
- Code for scaling Transformers ☆26 · Updated 4 years ago
- A library for squeakily cleaning and filtering language datasets. ☆46 · Updated last year
- Automatically take good care of your preemptible TPUs ☆36 · Updated last year
- GPU tester that detects broken and slow GPUs in a cluster ☆68 · Updated 2 years ago
- Contrastive Language-Image Pretraining ☆142 · Updated 2 years ago