Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
⭐ 1,688 · Oct 23, 2024 · Updated last year
Alternatives and similar repositories for transformer-deploy
Users interested in transformer-deploy are comparing it to the libraries listed below.
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable… (⭐ 1,585 · Jan 28, 2026 · Updated last month)
- ⚡ Boost inference speed of T5 models by 5x & reduce the model size by 3x. (⭐ 589 · Apr 24, 2023 · Updated 2 years ago)
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy-to-use hardware optimization… (⭐ 3,332 · Mar 13, 2026 · Updated last week)
- Transformer-related optimization, including BERT, GPT (⭐ 6,400 · Mar 27, 2024 · Updated last year)
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. (⭐ 10,446 · Updated this week)
- LightSeq: A High Performance Library for Sequence Processing and Generation (⭐ 3,302 · May 16, 2023 · Updated 2 years ago)
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. (⭐ 1,542 · Jul 18, 2025 · Updated 8 months ago)
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. (⭐ 2,101 · Jun 30, 2025 · Updated 8 months ago)
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore… (⭐ 4,709 · Mar 16, 2026 · Updated last week)
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment (⭐ 791 · Apr 24, 2023 · Updated 2 years ago)
- Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the… (⭐ 2,088 · Aug 15, 2024 · Updated last year)
- Accessible large language models via k-bit quantization for PyTorch. (⭐ 8,052 · Mar 17, 2026 · Updated last week)
- FastFormers: highly efficient transformer models for NLU (⭐ 709 · Mar 21, 2025 · Updated last year)
- skweak: A software toolkit for weak supervision applied to NLP tasks (⭐ 926 · Sep 2, 2024 · Updated last year)
- State-of-the-Art Text Embeddings (⭐ 18,427 · Mar 12, 2026 · Updated last week)
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations (⭐ 786 · May 19, 2024 · Updated last year)
- Efficient few-shot learning with Sentence Transformers (⭐ 2,699 · Dec 11, 2025 · Updated 3 months ago)
- Serve, optimize and scale PyTorch models in production (⭐ 4,360 · Aug 6, 2025 · Updated 7 months ago)
- Sparsity-aware deep learning inference runtime for CPUs (⭐ 3,163 · Jun 2, 2025 · Updated 9 months ago)
- Large Language Model Text Generation Inference (⭐ 10,812 · Jan 8, 2026 · Updated 2 months ago)
- Argilla is a collaboration tool for AI engineers and domain experts to build high-quality datasets (⭐ 4,905 · Updated this week)
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) (⭐ 4,742 · Jan 8, 2024 · Updated 2 years ago)
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision… (⭐ 9,563 · Mar 17, 2026 · Updated last week)
- OSLO: Open Source framework for Large-scale model Optimization (⭐ 309 · Aug 25, 2022 · Updated 3 years ago)
- PyTorch extensions for high-performance and large-scale training. (⭐ 3,404 · Apr 26, 2025 · Updated 10 months ago)
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT (⭐ 2,959 · Updated this week)
- A collection of libraries to optimise AI model performance (⭐ 8,352 · Jul 22, 2024 · Updated last year)
- Data augmentation for NLP (⭐ 4,652 · Jun 24, 2024 · Updated last year)
- Library for 8-bit optimizers and quantization routines. (⭐ 780 · Aug 18, 2022 · Updated 3 years ago)
- Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering. (⭐ 1,750 · Dec 20, 2023 · Updated 2 years ago)
- Fast inference engine for Transformer models (⭐ 4,368 · Feb 4, 2026 · Updated last month)
- A Unified Library for Parameter-Efficient and Modular Transfer Learning (⭐ 2,804 · Updated this week)
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities (⭐ 22,059 · Jan 23, 2026 · Updated 2 months ago)
- Prune a model while finetuning or training. (⭐ 406 · Jun 21, 2022 · Updated 3 years ago)
- Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conversational… (⭐ 4,235 · Aug 25, 2025 · Updated 6 months ago)
- Fast and memory-efficient exact attention (⭐ 22,938 · Updated this week)
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… (⭐ 433 · Aug 17, 2022 · Updated 3 years ago)
- Foundation Architecture for (M)LLMs (⭐ 3,137 · Apr 11, 2024 · Updated last year)
- Running large language models on a single GPU for throughput-oriented scenarios. (⭐ 9,379 · Oct 28, 2024 · Updated last year)