tunib-ai / parallelformers
Parallelformers: An Efficient Model Parallelization Toolkit for Deployment
☆787 · Updated 2 years ago
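For reference, a minimal usage sketch of parallelformers, based on the `parallelize()` entry point shown in the project's README (argument names may differ between releases; the model choice here is illustrative):

```python
# Hedged sketch of the documented parallelformers workflow.
from transformers import AutoModelForCausalLM, AutoTokenizer
from parallelformers import parallelize

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Shard the loaded model across 2 GPUs for inference;
# fp16=True halves the per-GPU memory footprint.
parallelize(model, num_gpus=2, fp16=True, verbose="detail")

inputs = tokenizer("Parallelformers is", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The split happens at load time with a single call rather than during training, which is why the toolkit is positioned as a deployment tool.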
Alternatives and similar repositories for parallelformers
Users interested in parallelformers are comparing it to the libraries listed below.
- OSLO: Open Source framework for Large-scale model Optimization ☆309 · Updated 2 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆609 · Updated 2 years ago
- ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x. ☆578 · Updated 2 years ago
- Implementation of RETRO, DeepMind's Retrieval based Attention net, in PyTorch ☆866 · Updated last year
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆433 · Updated 2 years ago
- FastFormers - highly efficient transformer models for NLU ☆705 · Updated 2 months ago
- Prune a model while finetuning or training. ☆402 · Updated 2 years ago
- Task-based datasets, preprocessing, and evaluation for sequence models. ☆574 · Updated 3 weeks ago
- ☆508 · Updated last year
- Library for 8-bit optimizers and quantization routines. ☆716 · Updated 2 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆530 · Updated last year
- Fast Inference Solutions for BLOOM ☆564 · Updated 7 months ago
- Repository containing code for "How to Train BERT with an Academic Budget" paper ☆313 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,000 · Updated 10 months ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,570 · Updated last year
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations ☆786 · Updated last year
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,687 · Updated 7 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,394 · Updated last year
- Guide: Finetune GPT2-XL (1.5 Billion Parameters) and finetune GPT-NEO (2.7 B) on a single GPU with Huggingface Transformers using DeepSpe… ☆437 · Updated last year
- Automatically split your PyTorch models on multiple GPUs for training & inference ☆654 · Updated last year
- [ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP'2021: Phrase Retrieval Learns Passage Retrieval, Too https://arxiv.o… ☆604 · Updated 2 years ago
- ☆1,521 · Updated last month
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ☆329 · Updated last year
- Tools to download and cleanup Common Crawl data ☆1,013 · Updated 2 years ago
- Large-scale language modeling tutorials with PyTorch ☆289 · Updated 3 years ago
- Reproduce results and replicate training of T0 (Multitask Prompted Training Enables Zero-Shot Task Generalization) ☆464 · Updated 2 years ago
- An open collection of implementation tips, tricks and resources for training large language models ☆473 · Updated 2 years ago
- Fusion-in-Decoder ☆570 · Updated last year
- Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint ☆393 · Updated last year
- ☆250 · Updated 10 months ago