microsoft / infinibatch
Efficient, check-pointed data loading for deep learning with massive data sets.
☆209 · Updated 2 years ago
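The one-line description above is all the listing gives for infinibatch, so here is a minimal sketch of what "check-pointed data loading" means in practice: an iterator over sharded text data whose position can be saved and restored, so a long training run can resume mid-epoch without replaying data. This is an illustration of the idea only; the class name, method names, and state layout below are assumptions for the sketch, not infinibatch's actual API.

```python
import random
from typing import Iterator, List


class CheckpointableShardReader:
    """Streams lines from a list of shard files and can save/restore its position.

    Illustrative sketch of a checkpointed data iterator (not infinibatch's API):
    the checkpoint is just (shard index, line offset, RNG seed), so it is tiny
    and can be stored alongside the model checkpoint.
    """

    def __init__(self, shard_paths: List[str], seed: int = 0):
        self.shard_paths = list(shard_paths)
        self.seed = seed
        self.shard_idx = 0   # which shard we are currently reading
        self.line_idx = 0    # how many lines of that shard were already yielded

    def getstate(self) -> dict:
        # Small, serializable checkpoint; save it together with the model state.
        return {"shard_idx": self.shard_idx, "line_idx": self.line_idx, "seed": self.seed}

    def setstate(self, state: dict) -> None:
        # Restore the exact reading position from a previously saved checkpoint.
        self.shard_idx = state["shard_idx"]
        self.line_idx = state["line_idx"]
        self.seed = state["seed"]

    def __iter__(self) -> Iterator[str]:
        order = list(range(len(self.shard_paths)))
        random.Random(self.seed).shuffle(order)  # deterministic shard order per seed
        while self.shard_idx < len(order):
            path = self.shard_paths[order[self.shard_idx]]
            with open(path, encoding="utf-8") as f:
                for i, line in enumerate(f):
                    if i < self.line_idx:
                        continue  # skip lines a restored run already consumed
                    self.line_idx = i + 1
                    yield line.rstrip("\n")
            self.shard_idx += 1
            self.line_idx = 0
```

The point of keeping the state this small is that resuming never requires re-reading or buffering the data already seen, which matters when the corpus is multiple terabytes.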
Alternatives and similar repositories for infinibatch
Users interested in infinibatch are comparing it to the libraries listed below.
- Official Pytorch Implementation of Length-Adaptive Transformer (ACL 2021) ☆102 · Updated 4 years ago
- Understanding the Difficulty of Training Transformers ☆330 · Updated 3 years ago
- Transformers without Tears: Improving the Normalization of Self-Attention ☆133 · Updated last year
- Code for the ICML'20 paper "Improving Transformer Optimization Through Better Initialization" ☆89 · Updated 4 years ago
- FairSeq repo with Apollo optimizer ☆114 · Updated last year
- Repository containing code for "How to Train BERT with an Academic Budget" paper ☆315 · Updated 2 years ago
- A minimal PyTorch Lightning OpenAI GPT w DeepSpeed Training! ☆113 · Updated 2 years ago
- Training Transformer-XL on 128 GPUs ☆141 · Updated 5 years ago
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆432 · Updated 3 years ago
- LM Pretraining with PyTorch/TPU ☆136 · Updated 6 years ago
- ☆219 · Updated 5 years ago
- Implementation of https://arxiv.org/abs/1904.00962 ☆377 · Updated 4 years ago
- Code for the paper "Are Sixteen Heads Really Better than One?" ☆172 · Updated 5 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro… ☆62 · Updated last month
- ☆48 · Updated 5 years ago
- LaNMT: Latent-variable Non-autoregressive Neural Machine Translation with Deterministic Inference ☆79 · Updated 4 years ago
- Cascaded Text Generation with Markov Transformers ☆129 · Updated 2 years ago
- Fast Block Sparse Matrices for Pytorch ☆547 · Updated 4 years ago
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference ☆160 · Updated 3 years ago
- Implementation of a Transformer, but completely in Triton ☆276 · Updated 3 years ago
- On the Stability of Fine-tuning BERT: Misconceptions, Explanations, and Strong Baselines ☆137 · Updated 2 years ago
- ☆64 · Updated 5 years ago
- XtremeDistil framework for distilling/compressing massive multilingual neural network models to tiny and efficient models for AI at scale ☆156 · Updated last year
- Profile the GPU memory usage of every line in a Pytorch code ☆83 · Updated 7 years ago
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors. ☆251 · Updated 2 years ago
- ☆249 · Updated 5 years ago
- Code for the Shortformer model, from the ACL 2021 paper by Ofir Press, Noah A. Smith and Mike Lewis. ☆147 · Updated 4 years ago
- Hyperparameter Search for AllenNLP ☆140 · Updated 7 months ago
- terashuf shuffles multi-terabyte text files using limited memory ☆226 · Updated 2 years ago
- This repository contains the code for running the character-level Sandwich Transformers from our ACL 2020 paper on Improving Transformer … ☆55 · Updated 4 years ago