ELS-RD / transformer-deploy
Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
☆1,687 · Updated 9 months ago
Alternatives and similar repositories for transformer-deploy
Users interested in transformer-deploy are comparing it to the libraries listed below
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,581 · Updated last year
- ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x. ☆586 · Updated 2 years ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,007 · Updated last year
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆791 · Updated 2 years ago
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆816 · Updated last week
- a fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, Decoders, etc) on CPU and GPU. ☆1,530 · Updated last month
- Library for 8-bit optimizers and quantization routines. ☆774 · Updated 3 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,409 · Updated last year
- PyTorch extensions for high performance and large scale training. ☆3,361 · Updated 3 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,046 · Updated last month
- FastFormers - highly efficient transformer models for NLU ☆707 · Updated 5 months ago
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python. ☆631 · Updated last week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,023 · Updated this week
- Fast Inference Solutions for BLOOM ☆564 · Updated 10 months ago
- ☆412 · Updated last year
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. ☆2,295 · Updated last week
- Prune a model while finetuning or training. ☆403 · Updated 3 years ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,057 · Updated last year
- Boosting your Web Services of Deep Learning Applications. ☆1,242 · Updated 4 years ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆484 · Updated 2 weeks ago
- Flexible components pairing 🤗 Transformers with Pytorch Lightning ☆610 · Updated 2 years ago
- Automatically create Faiss knn indices with the most optimal similarity search parameters. ☆868 · Updated last year
- ☆1,235 · Updated last year
- This repository contains tutorials and examples for Triton Inference Server ☆758 · Updated 2 weeks ago
- A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries. ☆1,217 · Updated this week
- The Triton TensorRT-LLM Backend ☆881 · Updated this week
- Triton Python, C++ and Java client libraries, and GRPC-generated client examples for go, java and scala (see the client sketch after this list). ☆641 · Updated last week
- Tools to download and cleanup Common Crawl data ☆1,024 · Updated 2 years ago
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,345 · Updated last year
- Transformer related optimization, including BERT, GPT ☆6,274 · Updated last year
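As a hedged illustration of the deployment pattern transformer-deploy and the Triton client libraries above target, here is a minimal sketch of querying a transformer model already served by Triton Inference Server from Python. The model name `my_transformer` and the tensor names `input_ids` and `logits` are assumptions for illustration; they depend entirely on your model repository configuration and are not taken from the listing above.

```python
# Minimal sketch: query a transformer model deployed on Triton Inference Server.
# Assumptions: a server is running on localhost:8000, and the deployed model is
# named "my_transformer" with an INT64 "input_ids" input and a "logits" output
# (these names are hypothetical and depend on your model repository config).
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a dummy batch of token ids; a real caller would use a Hugging Face tokenizer.
input_ids = np.ones((1, 16), dtype=np.int64)

infer_input = httpclient.InferInput("input_ids", list(input_ids.shape), "INT64")
infer_input.set_data_from_numpy(input_ids)
requested_output = httpclient.InferRequestedOutput("logits")

response = client.infer(
    model_name="my_transformer",
    inputs=[infer_input],
    outputs=[requested_output],
)
print(response.as_numpy("logits").shape)
```

The same request can be made over gRPC by swapping `tritonclient.http` for `tritonclient.grpc`; the client classes mirror each other.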