ELS-RD / transformer-deploy
Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
★ 1,654 · Updated 2 weeks ago
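As a rough illustration of the serving side, here is a minimal client sketch, assuming the converted model is served behind NVIDIA Triton, the inference server transformer-deploy builds on; the model name, tensor names, shapes and dtype below are placeholders that depend on how the model was converted:

```python
import numpy as np
import tritonclient.http as httpclient

# Placeholder names: the actual model and tensor names depend on the exported configuration.
client = httpclient.InferenceServerClient(url="localhost:8000")

input_ids = httpclient.InferInput("input_ids", [1, 16], "INT32")
input_ids.set_data_from_numpy(np.ones((1, 16), dtype=np.int32))

result = client.infer(model_name="my_transformer_model", inputs=[input_ids])
print(result.as_numpy("logits"))  # "logits" is also a placeholder output name
```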
Related projects
Alternatives and complementary repositories for transformer-deploy
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. ★ 1,532 · Updated 8 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ★ 1,891 · Updated this week
- ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x. ★ 565 · Updated last year
- 🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools (usage sketch below). ★ 2,559 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ★ 1,335 · Updated 7 months ago
- PyTorch extensions for high performance and large scale training. ★ 3,187 · Updated 2 months ago
- 🤗 Evaluate: A library for easily evaluating machine learning models and datasets (usage sketch below). ★ 2,029 · Updated last month
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ★ 1,936 · Updated this week
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ★ 980 · Updated 3 months ago
- Library for 8-bit optimizers and quantization routines (usage sketch below). ★ 714 · Updated 2 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ★ 779 · Updated last year
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ★ 1,923 · Updated 7 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster (usage sketch below). ★ 1,010 · Updated 6 months ago
- Transformer related optimization, including BERT, GPT ★ 5,871 · Updated 7 months ago
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ★ 739 · Updated this week
- a fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, Decoders, etc) on CPU and GPU. ★ 1,483 · Updated last year
- A modular RL library to fine-tune language models to human preferences ★ 2,211 · Updated 8 months ago
- Prune a model while finetuning or training. ★ 394 · Updated 2 years ago
- FastFormers - highly efficient transformer models for NLU ★ 701 · Updated 9 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ★ 1,885 · Updated 3 weeks ago
- Fast Inference Solutions for BLOOM ★ 560 · Updated last month
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ★ 1,242 · Updated 3 months ago
- Implementation of RETRO, Deepmind's Retrieval based Attention net, in Pytorch ★ 851 · Updated last year
- A high-performance Python-based I/O system for large (and small) deep learning problems, with strong support for PyTorch (usage sketch below). ★ 2,303 · Updated last month
- The Triton TensorRT-LLM Backend ★ 703 · Updated this week
- Triton backend that enables pre-process, post-processing and other logic to be implemented in Python. ★ 544 · Updated this week
- ★ 411 · Updated 11 months ago
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (usage sketch below). ★ 2,585 · Updated this week
- A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries. ★ 1,131 · Updated this week
- maximal update parametrization (µP) ★ 1,398 · Updated 3 months ago
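Usage sketches for a few of the entries above follow; each is a minimal, hedged illustration rather than the project's canonical example.

For the 🤗 Optimum entry (hardware optimization tools for Transformers and Diffusers), a sketch of running a checkpoint through ONNX Runtime; the model id is only an example and `export=True` assumes a recent Optimum release:

```python
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"  # example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True converts the PyTorch weights to ONNX on the fly and runs them with ONNX Runtime.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

inputs = tokenizer("This movie was great!", return_tensors="pt")
logits = model(**inputs).logits
print(logits.argmax(dim=-1))
```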
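For 🤗 Evaluate, loading a metric from the Hub and computing it locally:

```python
import evaluate

# Load a metric by name and score predictions against references.
accuracy = evaluate.load("accuracy")
result = accuracy.compute(predictions=[0, 1, 1, 0], references=[0, 1, 0, 0])
print(result)  # {'accuracy': 0.75}
```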
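For the 8-bit optimizers library (bitsandbytes), a sketch of swapping Adam for its 8-bit variant; a CUDA device is assumed:

```python
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()

# Drop-in replacement for torch.optim.Adam that keeps optimizer state in 8 bits,
# cutting optimizer-state memory roughly by 4x.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)

x = torch.randn(16, 1024, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```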
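The Python-level JIT compiler above is TorchDynamo, which now ships inside PyTorch as the graph-capture layer behind `torch.compile`; a sketch assuming PyTorch 2.x:

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
)

# TorchDynamo captures the forward pass at the Python bytecode level and hands the
# resulting graph to a backend compiler; the model code itself stays unmodified.
compiled = torch.compile(model)

out = compiled(torch.randn(8, 512))
```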
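For the high-performance I/O system (WebDataset), a streaming sketch; the shard URL pattern and the `jpg`/`cls` keys are assumptions about how the tar shards were packed:

```python
import webdataset as wds

# Hypothetical shard location; the brace pattern expands into many tar files.
shards = "https://example.com/train-{000000..000009}.tar"

dataset = (
    wds.WebDataset(shards)
    .decode("pil")            # decode image bytes into PIL images
    .to_tuple("jpg", "cls")   # yield (image, label) pairs from each sample
)

for image, label in dataset:
    break  # feed into a training loop or wrap with torch.utils.data.DataLoader
```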
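For the Torch-TensorRT compiler, an ahead-of-time compilation sketch; the ResNet example model, fixed input shape and FP16 precision are placeholder choices:

```python
import torch
import torch_tensorrt
import torchvision.models as models

model = models.resnet18(weights=None).eval().cuda()

# Compile the model with TensorRT; input shapes and allowed precisions are fixed at compile time.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.half},  # allow FP16 TensorRT kernels
)

with torch.no_grad():
    out = trt_model(torch.randn(1, 3, 224, 224, device="cuda"))
```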