yifding / hetseq
HetSeq: Distributed GPU Training on Heterogeneous Infrastructure
☆106 · Updated last year
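HetSeq is a PyTorch-based package, and most of the repositories listed below live in the same distributed-training ecosystem. For orientation, here is a minimal sketch of the generic `torch.distributed` data-parallel pattern that tools in this space coordinate. This is plain PyTorch, not HetSeq's own API, and it assumes one process per GPU launched with `torchrun` (the script name `train_sketch.py` is hypothetical).

```python
# Generic PyTorch DistributedDataParallel sketch -- illustrative only, NOT HetSeq's API.
# Assumes launch via `torchrun --nproc_per_node=<gpus> train_sketch.py`, which sets
# RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # NCCL backend for GPU-to-GPU communication; env:// reads rank/world size
    # from the launcher-provided environment variables.
    dist.init_process_group(backend="nccl", init_method="env://")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # A tiny model keeps the sketch self-contained; any nn.Module works.
    model = torch.nn.Linear(32, 2).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    # One synthetic training step: DDP all-reduces gradients during backward(),
    # keeping replicas on different (possibly heterogeneous) nodes in sync.
    inputs = torch.randn(8, 32).cuda(local_rank)
    loss = model(inputs).sum()
    loss.backward()
    optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```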
Alternatives and similar repositories for hetseq:
Users interested in hetseq are comparing it to the libraries listed below.
- A case study of efficient training of large language models using commodity hardware. ☆69 · Updated 2 years ago
- Check if you have training samples in your test set ☆64 · Updated 2 years ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆116 · Updated 3 years ago
- ☆153 · Updated 4 years ago
- Functional deep learning ☆108 · Updated 2 years ago
- Babysit your preemptible TPUs ☆85 · Updated 2 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis ☆147 · Updated 3 years ago
- GPT, but made only out of MLPs ☆88 · Updated 3 years ago
- My implementation of DeepMind's Perceiver ☆63 · Updated 4 years ago
- Implementation of Feedback Transformer in PyTorch ☆105 · Updated 4 years ago
- ☆103 · Updated 4 years ago
- A collection of code snippets for my PyTorch Lightning projects ☆107 · Updated 4 years ago
- TPU Index is a package for fast similarity search over large collections of high-dimensional vectors on TPUs ☆17 · Updated 3 years ago
- A collection of Models, Datasets, DataModules, Callbacks, Metrics, Losses and Loggers to better integrate pytorch-lightning with transformers ☆47 · Updated last year
- Python Research Framework ☆106 · Updated 2 years ago
- A 🤗-style implementation of BERT using lambda layers instead of self-attention ☆69 · Updated 4 years ago
- This repository contains example code to build models on TPUs ☆30 · Updated 2 years ago
- HomebrewNLP in JAX flavour for maintainable TPU training ☆50 · Updated last year
- A library to create and manage configuration files, especially for machine learning projects. ☆78 · Updated 3 years ago
- Swarm training framework using Haiku + JAX + Ray for layer-parallel transformer language models on unreliable, heterogeneous nodes ☆238 · Updated last year
- A queue service for quickly developing scripts that use all your GPUs efficiently ☆83 · Updated 2 years ago
- Training Transformer-XL on 128 GPUs ☆140 · Updated 4 years ago
- ☆151 · Updated 2 years ago
- Repo for training MLMs, CLMs, or T5-type models on the OLM pretraining data, but it should work with any Hugging Face text dataset ☆93 · Updated 2 years ago
- ☆39 · Updated 2 years ago
- Framework-agnostic library for checking array/tensor shapes at runtime ☆46 · Updated 4 years ago
- Manifold Mixup implementation for fastai v1 ☆19 · Updated 4 years ago
- Visualising the Transformer encoder ☆111 · Updated 4 years ago
- Repository containing code for the paper "How to Train BERT with an Academic Budget" ☆313 · Updated last year
- A diff tool for language models ☆42 · Updated last year