shawwn / tpunicorn
Babysit your preemptible TPUs
☆87 · Updated 3 years ago
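For context, tpunicorn installs a `pu` command-line tool for managing preemptible TPUs. Below is a minimal, non-authoritative sketch of supervising its babysit loop from Python, assuming the `pu babysit <tpu>` subcommand described in the project's README (command names may have changed; verify against the current docs, and note that `my-tpu` is a hypothetical placeholder):

```python
"""Sketch: keep a tpunicorn babysitter process alive.

Assumes `pip3 install -U tpunicorn`, which provides the `pu` CLI, and that
`pu babysit <tpu>` watches a preemptible TPU and recreates it when preempted.
"""
import subprocess
import sys
import time

TPU_NAME = "my-tpu"  # hypothetical TPU name; substitute your own

while True:
    # `pu babysit` normally blocks, recreating the TPU on preemption;
    # if it exits abnormally, restart it after a short backoff.
    result = subprocess.run(["pu", "babysit", TPU_NAME])
    if result.returncode == 0:
        break  # clean exit
    print(f"pu babysit exited with {result.returncode}; retrying in 30s",
          file=sys.stderr)
    time.sleep(30)
```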
Alternatives and similar repositories for tpunicorn
Users interested in tpunicorn are comparing it to the libraries listed below.
- HomebrewNLP in JAX flavour for maintainable TPU training ☆51 · Updated 2 years ago
- ☆63 · Updated 3 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- HetSeq: Distributed GPU Training on Heterogeneous Infrastructure ☆106 · Updated 2 years ago
- Python Research Framework ☆107 · Updated 3 years ago
- ☆131 · Updated 3 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆117 · Updated 4 years ago
- One stop shop for all things CARP ☆59 · Updated 3 years ago
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes ☆242 · Updated 2 years ago
- Tutorial to pretrain & fine-tune a 🤗 Flax T5 model on a TPU v3-8 with GCP ☆58 · Updated 3 years ago
- ☆78 · Updated 2 years ago
- Experiments with generating open-source language model assistants ☆97 · Updated 2 years ago
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- ☆66 · Updated 3 years ago
- Implementation of the specific Transformer architecture from PaLM ("Scaling Language Modeling with Pathways") in JAX (Equinox framework) ☆190 · Updated 3 years ago
- ☆40 · Updated 3 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith, and Mike Lewis. ☆147 · Updated 4 years ago
- Training and evaluation code for the paper "Headless Language Models: Learning without Predicting with Contrastive Weight Tying" (https:/… ☆28 · Updated last year
- JAX implementation of VQGAN ☆91 · Updated 3 years ago
- Simple annotated implementation of GPT-NeoX in PyTorch ☆110 · Updated 3 years ago
- A package for fine-tuning Transformers with TPUs, written in TensorFlow 2.0+ ☆37 · Updated 4 years ago
- [WIP] A 🔥 interface for running code in the cloud ☆86 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- DiffQ performs differentiable quantization using pseudo-quantization noise. It can automatically tune the number of bits used per weight … ☆237 · Updated 2 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆157 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset. ☆96 · Updated 2 years ago
- ☆94 · Updated 3 years ago
- A minimal PyTorch Lightning OpenAI GPT with DeepSpeed training! ☆112 · Updated 2 years ago
- Hidden Engrams: Long Term Memory for Transformer Model Inference ☆35 · Updated 4 years ago