ELS-RD / transformer-deploy
Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
⭐ 1,689 · Updated 11 months ago
Alternatives and similar repositories for transformer-deploy
Users interested in transformer-deploy are comparing it to the libraries listed below.
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable (a usage sketch follows this list). ⭐ 1,586 · Updated last year
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ⭐ 1,005 · Updated last year
- ⚡ Boost inference speed of T5 models by 5x & reduce the model size by 3x. ⭐ 587 · Updated 2 years ago
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ⭐ 1,532 · Updated 3 months ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization tools. ⭐ 3,115 · Updated last week
- Library for 8-bit optimizers and quantization routines. ⭐ 779 · Updated 3 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment. ⭐ 790 · Updated 2 years ago
- PyTorch extensions for high performance and large scale training. ⭐ 3,380 · Updated 5 months ago
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ⭐ 823 · Updated 2 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2. ⭐ 1,422 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ⭐ 2,067 · Updated 3 months ago
- FastFormers: highly efficient transformer models for NLU. ⭐ 707 · Updated 7 months ago
- Fast Inference Solutions for BLOOM. ⭐ 565 · Updated last year
- Prune a model while finetuning or training. ⭐ 405 · Updated 3 years ago
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python. ⭐ 648 · Updated last week
- ⭐ 413 · Updated last year
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ⭐ 1,062 · Updated last year
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of Triton Inference Server models. ⭐ 494 · Updated last week
- Cramming the training of a (BERT-type) language model into limited compute. ⭐ 1,348 · Updated last year
- Transformer-related optimization, including BERT and GPT. ⭐ 6,326 · Updated last year
- Tools to download and clean up Common Crawl data. ⭐ 1,031 · Updated 2 years ago
- Automatically create Faiss kNN indices with optimal similarity search parameters. ⭐ 872 · Updated last year
- Flexible components pairing 🤗 Transformers with PyTorch Lightning. ⭐ 612 · Updated 2 years ago
- Automatically split your PyTorch models on multiple GPUs for training & inference. ⭐ 657 · Updated last year
- Boosting web services for deep learning applications. ⭐ 1,245 · Updated 4 years ago
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. ⭐ 2,340 · Updated 3 weeks ago
- SGPT: GPT Sentence Embeddings for Semantic Search. ⭐ 875 · Updated last year
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala. ⭐ 652 · Updated last week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ⭐ 2,202 · Updated last year
- Maximal update parametrization (µP). ⭐ 1,611 · Updated last year
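
The Kernl entry above advertises speeding up a Hugging Face model with a single line of code. As a rough, hedged illustration only, the sketch below shows what that usage typically looks like; the import path `kernl.model_optimization.optimize_model`, the example model name, and the inference setup are assumptions based on the project's README and may not match the current API.

```python
# Minimal sketch of Kernl's advertised one-line optimization.
# NOTE: the import path and call below are assumed from the project's README
# and may differ between Kernl releases; treat this as illustrative only.
import torch
from transformers import AutoModel, AutoTokenizer
from kernl.model_optimization import optimize_model  # assumed entry point

model_name = "bert-base-uncased"  # hypothetical example model
model = AutoModel.from_pretrained(model_name).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_name)

optimize_model(model)  # the advertised single-line optimization call (assumed)

inputs = tokenizer("Hello, world!", return_tensors="pt").to("cuda")
with torch.inference_mode(), torch.cuda.amp.autocast():
    outputs = model(**inputs)
```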