tunib-ai / parallelformers
Parallelformers: An Efficient Model Parallelization Toolkit for Deployment
⭐ 791 · Updated 2 years ago
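For context, parallelformers' documented entry point is a single `parallelize` call that shards an already-loaded Hugging Face model across GPUs for inference. A minimal sketch based on the project's README; the model name, GPU count, and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from parallelformers import parallelize

# Load the model on CPU first; parallelize() moves the shards to the GPUs.
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Split the model across 2 GPUs, casting weights to fp16 for inference.
parallelize(model, num_gpus=2, fp16=True)

inputs = tokenizer("Parallelformers is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```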
Alternatives and similar repositories for parallelformers
Users interested in parallelformers are comparing it to the libraries listed below.
- OSLO: Open Source framework for Large-scale model Optimization ⭐ 309 · Updated 2 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ⭐ 609 · Updated 2 years ago
- FastFormers - highly efficient transformer models for NLU ⭐ 705 · Updated 3 months ago
- Task-based datasets, preprocessing, and evaluation for sequence models. ⭐ 583 · Updated 2 months ago
- Prune a model while finetuning or training. ⭐ 403 · Updated 3 years ago
- ⚡ Boost inference speed of T5 models by 5x & reduce the model size by 3x. ⭐ 581 · Updated 2 years ago
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ⭐ 433 · Updated 2 years ago
- ⭐ 514 · Updated last year
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ⭐ 869 · Updated last year
- Repository containing code for the "How to Train BERT with an Academic Budget" paper ⭐ 313 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ⭐ 1,003 · Updated 11 months ago
- Library for 8-bit optimizers and quantization routines (a usage sketch follows this list). ⭐ 716 · Updated 2 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ⭐ 536 · Updated last year
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations ⭐ 786 · Updated last year
- Mistral: A strong, northwesterly wind: Framework for transparent and accessible large-scale language model training, built with Hugging F… ⭐ 573 · Updated last year
- Transformers for Longer Sequences ⭐ 617 · Updated 2 years ago
- Fast Inference Solutions for BLOOM ⭐ 564 · Updated 9 months ago
- Pretrain and finetune ELECTRA with fastai and huggingface. (Results of the paper replicated!) ⭐ 330 · Updated last year
- Guide: Finetune GPT2-XL (1.5 billion parameters) and finetune GPT-NEO (2.7B) on a single GPU with Huggingface Transformers using DeepSpe… ⭐ 437 · Updated 2 years ago
- Tools to download and clean up Common Crawl data ⭐ 1,017 · Updated 2 years ago
- ⭐ 1,228 · Updated 11 months ago
- An open collection of implementation tips, tricks and resources for training large language models ⭐ 477 · Updated 2 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ⭐ 473 · Updated 3 years ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ⭐ 1,575 · Updated last year
- [ACL 2021] Learning Dense Representations of Phrases at Scale; EMNLP 2021: Phrase Retrieval Learns Passage Retrieval, Too https://arxiv.o… ⭐ 604 · Updated 3 years ago
- Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint ⭐ 396 · Updated last year
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ⭐ 1,688 · Updated 8 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ⭐ 1,402 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways ⭐ 823 · Updated 2 years ago
- Automatically split your PyTorch models on multiple GPUs for training & inference ⭐ 656 · Updated last year
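To illustrate one of the entries above, bitsandbytes' 8-bit optimizers are intended as drop-in replacements for their `torch.optim` counterparts, keeping optimizer state in 8 bits to cut its memory footprint roughly 4x. A minimal sketch assuming a CUDA device; the toy model and learning rate are illustrative:

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

model = nn.Linear(512, 512).cuda()

# Drop-in replacement for torch.optim.Adam with 8-bit optimizer state.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)

# Standard training step: the optimizer API is unchanged.
x = torch.randn(8, 512, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```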