tunib-ai / parallelformers
Parallelformers: An Efficient Model Parallelization Toolkit for Deployment
★790 · Updated 2 years ago
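For context on what the toolkit does: parallelformers splits an existing Hugging Face model across multiple GPUs for inference with a single function call. The sketch below follows the usage pattern from the project's README; the model name and GPU count are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from parallelformers import parallelize

# Load a large Hugging Face model on the CPU as usual.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")

# One call shards the model's layers across 2 GPUs for inference;
# fp16=True roughly halves the per-GPU memory footprint.
parallelize(model, num_gpus=2, fp16=True, verbose="detail")

# Generate as with any transformers model; parallelformers routes
# the inputs to the right devices internally.
inputs = tokenizer("Parallelformers is", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=4, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```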
Alternatives and similar repositories for parallelformers
Users interested in parallelformers are comparing it to the libraries listed below
- OSLO: Open Source framework for Large-scale model Optimization ★309 · Updated 3 years ago
- Flexible components pairing 🤗 Transformers with Pytorch Lightning ★612 · Updated 2 years ago
- ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x. ★587 · Updated 2 years ago
- FastFormers - highly efficient transformer models for NLU ★706 · Updated 7 months ago
- Prune a model while finetuning or training. ★405 · Updated 3 years ago
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ★432 · Updated 3 years ago
- Task-based datasets, preprocessing, and evaluation for sequence models. ★587 · Updated this week
- ★524 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ★1,006 · Updated last year
- Library for 8-bit optimizers and quantization routines (see the optimizer sketch after this list). ★780 · Updated 3 years ago
- Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch ★874 · Updated 2 years ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ★330 · Updated last year
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ★1,690 · Updated last year
- Repository containing code for the "How to Train BERT with an Academic Budget" paper ★315 · Updated 2 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ★544 · Updated 2 years ago
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7B) on a single GPU with Huggingface Transformers using DeepSpe… ★436 · Updated 2 years ago
- An open collection of implementation tips, tricks and resources for training large language models ★482 · Updated 2 years ago
- Tools to download and clean up Common Crawl data ★1,031 · Updated 2 years ago
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… ★574 · Updated last year
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ★1,586 · Updated last year
- Fast Inference Solutions for BLOOM ★565 · Updated last year
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations ★788 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ★824 · Updated 2 years ago
- Transformers for Longer Sequences ★620 · Updated 3 years ago
- Automatically split your PyTorch models on multiple GPUs for training & inference ★658 · Updated last year
- ★1,250 · Updated last year
- BLEURT is a metric for Natural Language Generation based on transfer learning. ★765 · Updated 2 years ago
- ★413 · Updated last year
- A minimal PyTorch Lightning OpenAI GPT w DeepSpeed Training! ★113 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ★1,423 · Updated last year
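As a taste of the 8-bit optimizer library listed above, the following sketch swaps torch.optim.Adam for its bitsandbytes counterpart; the model and hyperparameters are placeholders, and a CUDA device is assumed.

```python
import torch
import bitsandbytes as bnb

# Any PyTorch model works; a tiny linear layer stands in here.
model = torch.nn.Linear(512, 512).cuda()

# Drop-in replacement for torch.optim.Adam: optimizer state is kept
# in 8 bits, cutting optimizer memory roughly 4x vs fp32 state.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3, betas=(0.9, 0.995))

# Standard training step with dummy data.
x = torch.randn(16, 512, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```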