yifding / hetseq
HetSeq: Distributed GPU Training on Heterogeneous Infrastructure
☆106 · Updated 2 years ago
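For context on what HetSeq-style tooling builds on, here is a minimal sketch of a plain `torch.distributed` data-parallel training step. It uses standard PyTorch APIs only, not HetSeq's own entry points (which may differ); the model, hyperparameters, and process-group settings are illustrative assumptions.

```python
# Minimal distributed data-parallel sketch. Launch with, e.g.:
#   torchrun --nproc_per_node=2 train_sketch.py
# torchrun sets RANK, LOCAL_RANK, WORLD_SIZE, and MASTER_ADDR/PORT.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")       # one process per GPU
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(512, 512).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    x = torch.randn(8, 512, device=local_rank)    # placeholder batch
    loss = model(x).pow(2).mean()
    loss.backward()        # DDP all-reduces gradients across processes here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```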
Alternatives and similar repositories for hetseq
Users interested in hetseq are comparing it to the libraries listed below.
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 4 years ago
- ☆104 · Updated 4 years ago
- A diff tool for language models ☆44 · Updated last year
- Functional deep learning ☆108 · Updated 3 years ago
- A case study of efficient training of large language models using commodity hardware. ☆68 · Updated 3 years ago
- Check if you have training samples in your test set ☆64 · Updated 3 years ago
- GPT, but made only out of MLPs (a minimal sketch of the core gating idea appears after this list) ☆89 · Updated 4 years ago
- Trains Transformer model variants. Data isn't shuffled between batches. ☆143 · Updated 3 years ago
- Code for scaling Transformers ☆26 · Updated 5 years ago
- Python Research Framework ☆106 · Updated 3 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 5 years ago
- The Python library with command line tools to interact with Dynabench (https://dynabench.org/), such as uploading models. ☆55 · Updated 3 years ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆117 · Updated 3 years ago
- Implementation of Feedback Transformer in Pytorch ☆108 · Updated 4 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- Implementation of the GBST block from the Charformer paper, in Pytorch ☆119 · Updated 4 years ago
- http://nlp.seas.harvard.edu/2018/04/03/attention.html ☆62 · Updated 4 years ago
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆189 · Updated 3 years ago
- A collection of code snippets for my PyTorch Lightning projects ☆107 · Updated 4 years ago
- Docs ☆143 · Updated last year
- Babysit your preemptible TPUs ☆86 · Updated 3 years ago
- ☆65 · Updated 5 years ago
- Amos optimizer with JEstimator lib. ☆82 · Updated last year
- ☆87 · Updated 3 years ago
- The official repository for our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers". We s… ☆67 · Updated 2 years ago
- ☆153 · Updated 5 years ago
- A simple and working implementation of Electra, the fastest way to pretrain language models from scratch, in Pytorch ☆235 · Updated 2 years ago
- ☆67 · Updated 3 years ago
- A minimal PyTorch Lightning OpenAI GPT w DeepSpeed Training! ☆113 · Updated 2 years ago
- LM Pretraining with PyTorch/TPU ☆136 · Updated 6 years ago
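As referenced in the "GPT, but made only out of MLPs" entry above, the key idea there is replacing self-attention with a spatial gating unit that mixes tokens via a learned projection along the sequence axis. The sketch below is a generic illustration of that mechanism, not that repository's actual API; the class name, dimensions, and initialization are assumptions.

```python
# Minimal sketch of a gMLP-style spatial gating unit: token mixing is done
# by a linear projection over the sequence dimension instead of attention.
import torch
import torch.nn as nn

class SpatialGatingUnit(nn.Module):
    def __init__(self, dim, seq_len):
        super().__init__()
        self.norm = nn.LayerNorm(dim // 2)
        # Mixes information across token positions (the sequence axis).
        self.proj = nn.Linear(seq_len, seq_len)

    def forward(self, x):
        u, v = x.chunk(2, dim=-1)                          # split channels
        v = self.norm(v)
        v = self.proj(v.transpose(1, 2)).transpose(1, 2)   # mix tokens
        return u * v                                       # elementwise gate

x = torch.randn(2, 16, 64)                 # (batch, seq_len, dim)
gate = SpatialGatingUnit(dim=64, seq_len=16)
print(gate(x).shape)                       # torch.Size([2, 16, 32])
```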