ELS-RD / transformer-deploy
Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
⭐ 1,687 · Updated 8 months ago
Alternatives and similar repositories for transformer-deploy
Users interested in transformer-deploy are comparing it to the libraries listed below.
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable (a sketch of the one-line call follows this list). ⭐ 1,575 · Updated last year
- ⚡ Boost inference speed of T5 models by 5x and reduce the model size by 3x. ⭐ 581 · Updated 2 years ago
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ⭐ 1,526 · Updated 3 months ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ⭐ 1,001 · Updated 11 months ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment. ⭐ 790 · Updated 2 years ago
- Library for 8-bit optimizers and quantization routines (an 8-bit optimizer sketch follows this list). ⭐ 716 · Updated 2 years ago
- FastFormers: highly efficient transformer models for NLU. ⭐ 705 · Updated 3 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ⭐ 2,026 · Updated last week
- PyTorch extensions for high-performance and large-scale training. ⭐ 3,337 · Updated 2 months ago
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments (a serving sketch follows this list). ⭐ 804 · Updated last week
- Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python. ⭐ 622 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization tools. ⭐ 2,967 · Updated last week
- ⭐ 411 · Updated last year
- Fast Inference Solutions for BLOOM. ⭐ 564 · Updated 9 months ago
- Ongoing research training transformer language models at scale, including BERT & GPT-2. ⭐ 1,401 · Updated last year
- Prune a model while finetuning or training. ⭐ 403 · Updated 3 years ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ⭐ 1,053 · Updated last year
- Automatically create Faiss KNN indices with optimal similarity-search parameters (an index-building sketch follows this list). ⭐ 862 · Updated last year
- Boost web services for deep learning applications. ⭐ 1,241 · Updated 4 years ago
- Cramming the training of a (BERT-type) language model into limited compute. ⭐ 1,338 · Updated last year
- Flexible components pairing 🤗 Transformers with PyTorch Lightning. ⭐ 609 · Updated 2 years ago
- Sparsity-aware deep learning inference runtime for CPUs. ⭐ 3,157 · Updated last month
- Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of Triton Inference Server models. ⭐ 479 · Updated last month
- Transformer-related optimizations, including BERT and GPT. ⭐ 6,231 · Updated last year
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch. ⭐ 868 · Updated last year
- 🤗 Evaluate: a library for easily evaluating machine learning models and datasets (a metric sketch follows this list). ⭐ 2,254 · Updated 3 weeks ago
- The Triton TensorRT-LLM Backend. ⭐ 859 · Updated this week
- This repository contains tutorials and examples for Triton Inference Server. ⭐ 732 · Updated last month
- ⭐ 1,227 · Updated 11 months ago
- Automatically split your PyTorch models across multiple GPUs for training and inference. ⭐ 656 · Updated last year
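
The Kernl entry above advertises a one-line optimization call. A minimal sketch of that pattern, assuming the `optimize_model` entry point shown in Kernl's README (the module path and model choice are illustrative; Kernl targets fp16 inference on recent NVIDIA GPUs):

```python
import torch
from transformers import AutoModel

# Any supported Hugging Face encoder; "bert-base-uncased" is just an example.
model = AutoModel.from_pretrained("bert-base-uncased").eval().cuda()

# The advertised single line: swaps supported submodules for fused
# OpenAI Triton kernels in place (entry point as documented in the README).
from kernl.model_optimization import optimize_model
optimize_model(model)

# Inference proceeds as usual, under autocast for fp16.
tokens = torch.randint(0, 1000, (1, 16), device="cuda")
mask = torch.ones_like(tokens)
with torch.inference_mode(), torch.autocast("cuda"):
    out = model(input_ids=tokens, attention_mask=mask)
```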
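
For the bitsandbytes entry, the usual pattern is a drop-in swap of a 32-bit optimizer for its 8-bit counterpart. A minimal sketch using `bnb.optim.Adam8bit` from the project's README (the model and hyperparameters are placeholders):

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(512, 512).cuda()

# Same constructor shape as torch.optim.Adam, but optimizer state is
# stored in 8 bits, shrinking optimizer memory roughly 4x.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)

# The training step itself is unchanged.
x = torch.randn(8, 512, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```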
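
PyTriton's "Flask/FastAPI-like" pitch means binding a plain Python callable to a Triton endpoint instead of authoring a model repository by hand. A sketch following the structure of NVIDIA's pytriton examples (`Triton`, `ModelConfig`, `Tensor`, `@batch`); the endpoint name and the toy inference function are hypothetical:

```python
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton

@batch
def infer_fn(data: np.ndarray):
    # Toy "model" that doubles its input; a real deployment would call
    # a loaded PyTorch or TensorRT model here.
    return {"result": data * 2.0}

with Triton() as triton:
    triton.bind(
        model_name="doubler",  # hypothetical endpoint name
        infer_func=infer_fn,
        inputs=[Tensor(name="data", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="result", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=64),
    )
    triton.serve()  # blocks; requests arrive via Triton's HTTP/gRPC ports
```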
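
The autofaiss entry refers to its single `build_index` call, which chooses the Faiss index type and tunes its hyperparameters against a stated memory budget. A sketch per the project's README; the array sizes and memory figures are placeholders:

```python
import numpy as np
from autofaiss import build_index

# Toy embedding matrix; in practice embeddings often live in .npy files on disk.
embeddings = np.random.rand(10_000, 128).astype(np.float32)

# autofaiss picks the index type (HNSW, IVF, PQ variants, ...) so the index
# fits the declared memory limits, then tunes its search parameters.
index, index_infos = build_index(
    embeddings,
    save_on_disk=False,
    max_index_memory_usage="2GB",    # placeholder budget
    current_memory_available="4GB",  # placeholder budget
)

# The result is a regular Faiss index.
distances, ids = index.search(embeddings[:1], 5)
```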
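
Finally, the 🤗 Evaluate entry boils down to loading a metric by name and calling `compute`; a minimal sketch with toy labels:

```python
import evaluate

# Fetch a metric implementation from the Hugging Face Hub by name.
accuracy = evaluate.load("accuracy")

# Compare predictions against references; returns a dict of scores.
result = accuracy.compute(
    predictions=[0, 1, 1, 0],
    references=[0, 1, 0, 0],
)
print(result)  # {'accuracy': 0.75}
```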