ELS-RD / transformer-deploy
Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
★1,671 · Updated 2 months ago
Alternatives and similar repositories for transformer-deploy:
Users interested in transformer-deploy are comparing it to the libraries listed below.
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ★1,549 · Updated 11 months ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… (usage sketch after this list) ★2,667 · Updated this week
- A fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, Decoders, etc.) on CPU and GPU. ★1,503 · Updated last year
- ⚡ Boost inference speed of T5 models by 5x & reduce the model size by 3x. (usage sketch after this list) ★571 · Updated last year
- PyTorch extensions for high performance and large scale training. ★3,232 · Updated this week
- Library for 8-bit optimizers and quantization routines. (usage sketch after this list) ★717 · Updated 2 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ★781 · Updated last year
- FastFormers - highly efficient transformer models for NLU ★703 · Updated last year
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets. ★2,082 · Updated last week
- Prune a model while finetuning or training. ★397 · Updated 2 years ago
- Fast Inference Solutions for BLOOM ★563 · Updated 3 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ★1,354 · Updated 9 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ★1,016 · Updated 9 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ★2,086 · Updated this week
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ★982 · Updated 5 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ★1,942 · Updated last month
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ★1,998 · Updated 9 months ago
- Tools to download and clean up Common Crawl data ★980 · Updated last year
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. (usage sketch after this list) ★763 · Updated last month
- Maximal update parametrization (µP) ★1,428 · Updated 6 months ago
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ★857 · Updated last year
- Transformer-related optimization, including BERT, GPT ★5,981 · Updated 9 months ago
- Cramming the training of a (BERT-type) language model into limited compute. ★1,307 · Updated 7 months ago
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python. ★576 · Updated this week
- Longformer: The Long-Document Transformer ★2,072 · Updated last year
- Foundation Architecture for (M)LLMs ★3,038 · Updated 9 months ago
- Running BERT without Padding ★468 · Updated 2 years ago
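
The "Accelerate inference and training" entry above is Hugging Face Optimum's tagline. As a minimal sketch of the hardware-optimization path it offers, the snippet below runs a checkpoint through ONNX Runtime; it assumes `pip install optimum[onnxruntime]`, the model id is only an example, and on older Optimum releases the export flag is `from_transformers=True` rather than `export=True`:

```python
# Sketch: ONNX Runtime inference via Hugging Face Optimum.
# Assumes `pip install optimum[onnxruntime]`; the model id is an example.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# export=True converts the PyTorch checkpoint to ONNX on the fly.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Deploying transformers is getting easier."))
```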
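The "⚡ Boost inference speed of T5 models" entry matches fastT5, which exports T5 to quantized ONNX. A sketch following the pattern in its README, assuming `pip install fastt5`; the checkpoint and prompt are illustrative:

```python
# Sketch: quantized ONNX export of a T5 model with fastT5.
# Assumes `pip install fastt5`; 't5-small' is an example checkpoint.
from fastT5 import export_and_get_onnx_model
from transformers import AutoTokenizer

model_name = "t5-small"
model = export_and_get_onnx_model(model_name)  # exports and quantizes on first call
tokenizer = AutoTokenizer.from_pretrained(model_name)

tokens = tokenizer("translate English to French: The house is small.", return_tensors="pt")
out = model.generate(
    input_ids=tokens["input_ids"],
    attention_mask=tokens["attention_mask"],
    num_beams=2,
)
print(tokenizer.decode(out.squeeze(), skip_special_tokens=True))
```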
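The 8-bit optimizers entry corresponds to bitsandbytes, whose headline feature is a drop-in 8-bit Adam that stores optimizer state in 8-bit, cutting its memory footprint roughly 4x. A minimal sketch, assuming `pip install bitsandbytes` and a CUDA GPU; the toy linear layer stands in for a real model:

```python
# Sketch: swapping torch.optim.Adam for bitsandbytes' 8-bit variant.
# Assumes `pip install bitsandbytes` and a CUDA-capable GPU.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()  # stand-in for a real network

# Drop-in replacement for torch.optim.Adam; optimizer state is kept in 8-bit.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)

x = torch.randn(16, 1024, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```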
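For the PyTriton entry, the sketch below binds a plain Python callable to a Triton endpoint, following the pattern documented in NVIDIA's pytriton README; it assumes `pip install nvidia-pytriton`, and the model and tensor names are illustrative:

```python
# Sketch: serving a Python callable with PyTriton.
# Assumes `pip install nvidia-pytriton`; names are illustrative.
import numpy as np
from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton

@batch
def infer_fn(input__0):
    # Toy inference: double the input batch.
    return {"output__0": input__0 * 2.0}

with Triton() as triton:
    triton.bind(
        model_name="doubler",
        infer_func=infer_fn,
        inputs=[Tensor(name="input__0", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="output__0", dtype=np.float32, shape=(-1,))],
        config=ModelConfig(max_batch_size=64),
    )
    triton.serve()  # blocks, exposing Triton's HTTP/gRPC endpoints
```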