Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
⭐ 1,688 · Updated Oct 23, 2024
Alternatives and similar repositories for transformer-deploy
Users interested in transformer-deploy are comparing it to the libraries listed below.
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… — ⭐ 1,585 · Updated Jan 28, 2026
- ⚡ Boost inference speed of T5 models by 5x and reduce model size by 3x. — ⭐ 589 · Updated Apr 24, 2023
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… — ⭐ 3,305 · Updated Feb 9, 2026
- Transformer-related optimization, including BERT and GPT. — ⭐ 6,398 · Updated Mar 27, 2024
- LightSeq: A High Performance Library for Sequence Processing and Generation. — ⭐ 3,303 · Updated May 16, 2023
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. — ⭐ 10,393 · Updated this week
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. — ⭐ 1,544 · Updated Jul 18, 2025
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. — ⭐ 2,097 · Updated Jun 30, 2025
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… — ⭐ 4,706 · Updated Jan 12, 2026
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment. — ⭐ 791 · Updated Apr 24, 2023
- Accessible large language models via k-bit quantization for PyTorch. — ⭐ 7,997 · Updated this week
- Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the… — ⭐ 2,083 · Updated Aug 15, 2024
- FastFormers: highly efficient transformer models for NLU. — ⭐ 709 · Updated Mar 21, 2025
- Sparsity-aware deep learning inference runtime for CPUs. — ⭐ 3,159 · Updated Jun 2, 2025
- skweak: A software toolkit for weak supervision applied to NLP tasks. — ⭐ 926 · Updated Sep 2, 2024
- Efficient few-shot learning with Sentence Transformers. — ⭐ 2,688 · Updated Dec 11, 2025
- State-of-the-Art Text Embeddings. — ⭐ 18,323 · Updated this week
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations. — ⭐ 786 · Updated May 19, 2024
- Serve, optimize and scale PyTorch models in production. — ⭐ 4,359 · Updated Aug 6, 2025
- OSLO: Open Source framework for Large-scale model Optimization. — ⭐ 309 · Updated Aug 25, 2022
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… — ⭐ 9,513 · Updated this week
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF). — ⭐ 4,738 · Updated Jan 8, 2024
- A Unified Library for Parameter-Efficient and Modular Transfer Learning. — ⭐ 2,801 · Updated this week
- PyTorch extensions for high-performance and large-scale training. — ⭐ 3,400 · Updated Apr 26, 2025
- Large Language Model Text Generation Inference. — ⭐ 10,788 · Updated Jan 8, 2026
- Argilla is a collaboration tool for AI engineers and domain experts to build high-quality datasets. — ⭐ 4,884 · Updated this week
- Data augmentation for NLP. — ⭐ 4,645 · Updated Jun 24, 2024
- Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conve… — ⭐ 4,231 · Updated Aug 25, 2025
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT. — ⭐ 2,948 · Updated this week
- A collection of libraries to optimize AI model performance. — ⭐ 8,354 · Updated Jul 22, 2024
- Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering. — ⭐ 1,752 · Updated Dec 20, 2023
- Foundation Architecture for (M)LLMs. — ⭐ 3,135 · Updated Apr 11, 2024
- Fast and memory-efficient exact attention. — ⭐ 22,460 · Updated this week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities. — ⭐ 22,030 · Updated Jan 23, 2026
- Library for 8-bit optimizers and quantization routines. — ⭐ 780 · Updated Aug 18, 2022
- Prune a model while finetuning or training. — ⭐ 406 · Updated Jun 21, 2022
- Leveraging BERT and c-TF-IDF to create easily interpretable topics. — ⭐ 7,426 · Updated Feb 20, 2026
- Running large language models on a single GPU for throughput-oriented scenarios. — ⭐ 9,382 · Updated Oct 28, 2024
- Prevent PyTorch's `CUDA error: out of memory` in just 1 line of code. — ⭐ 1,829 · Updated Jan 18, 2026