Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀
⭐ 1,687 · Oct 23, 2024 · Updated last year
Alternatives and similar repositories for transformer-deploy
Users that are interested in transformer-deploy are comparing it to the libraries listed below.
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ⭐ 1,585 · Jan 28, 2026 · Updated 3 months ago
- ⚡ boost inference speed of T5 models by 5x & reduce the model size by 3x. ⭐ 588 · Apr 24, 2023 · Updated 3 years ago
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ⭐ 3,376 · Updated this week
- Transformer related optimization, including BERT, GPT ⭐ 6,415 · Mar 27, 2024 · Updated 2 years ago
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ⭐ 10,625 · Updated this week
- LightSeq: A High Performance Library for Sequence Processing and Generation ⭐ 3,300 · May 16, 2023 · Updated 2 years ago
- A fast and user-friendly runtime for transformer inference (Bert, Albert, GPT2, Decoders, etc) on CPU and GPU. ⭐ 1,547 · Jul 18, 2025 · Updated 9 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ⭐ 2,110 · Jun 30, 2025 · Updated 10 months ago
- AITemplate is a Python framework which renders neural networks into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ⭐ 4,718 · Apr 9, 2026 · Updated 3 weeks ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ⭐ 789 · Apr 24, 2023 · Updated 3 years ago
- Explain, analyze, and visualize NLP language models. Ecco creates interactive visualizations directly in Jupyter notebooks explaining the… ⭐ 2,096 · Aug 15, 2024 · Updated last year
- FastFormers - highly efficient transformer models for NLU ⭐ 709 · Mar 21, 2025 · Updated last year
- Accessible large language models via k-bit quantization for PyTorch. ⭐ 8,168 · Apr 20, 2026 · Updated 2 weeks ago
- skweak: A software toolkit for weak supervision applied to NLP tasks ⭐ 926 · Sep 2, 2024 · Updated last year
- State-of-the-Art Text Embeddings ⭐ 18,615 · Updated this week
- NL-Augmenter 🦎 → 🐍 A Collaborative Repository of Natural Language Transformations ⭐ 787 · May 19, 2024 · Updated last year
- Efficient few-shot learning with Sentence Transformers ⭐ 2,724 · Apr 17, 2026 · Updated 2 weeks ago
- Serve, optimize and scale PyTorch models in production ⭐ 4,360 · Aug 6, 2025 · Updated 8 months ago
- Sparsity-aware deep learning inference runtime for CPUs ⭐ 3,162 · Jun 2, 2025 · Updated 11 months ago
- Large Language Model Text Generation Inference ⭐ 10,848 · Mar 21, 2026 · Updated last month
- Argilla is a collaboration tool for AI engineers and domain experts to build high-quality datasets ⭐ 4,954 · Apr 27, 2026 · Updated last week
- A repo for distributed training of language models with Reinforcement Learning via Human Feedback (RLHF) ⭐ 4,745 · Jan 8, 2024 · Updated 2 years ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ⭐ 9,639 · Updated this week
- OSLO: Open Source framework for Large-scale model Optimization ⭐ 309 · Aug 25, 2022 · Updated 3 years ago
- PyTorch extensions for high performance and large scale training. ⭐ 3,409 · Apr 26, 2025 · Updated last year
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ⭐ 2,966 · Updated this week
- A collection of libraries to optimise AI model performances ⭐ 8,349 · Jul 22, 2024 · Updated last year
- Data augmentation for NLP ⭐ 4,656 · Jun 24, 2024 · Updated last year
- Library for 8-bit optimizers and quantization routines. ⭐ 779 · Aug 18, 2022 · Updated 3 years ago
- Fast & easy transfer learning for NLP. Harvesting language models for the industry. Focus on Question Answering. ⭐ 1,754 · Dec 20, 2023 · Updated 2 years ago
- Fast inference engine for Transformer models ⭐ 4,457 · Feb 4, 2026 · Updated 3 months ago
- A Unified Library for Parameter-Efficient and Modular Transfer Learning ⭐ 2,812 · Apr 26, 2026 · Updated last week
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ⭐ 22,114 · Jan 23, 2026 · Updated 3 months ago
- Prune a model while finetuning or training. ⭐ 406 · Jun 21, 2022 · Updated 3 years ago
- Transformers for Information Retrieval, Text Classification, NER, QA, Language Modelling, Language Generation, T5, Multi-Modal, and Conve… ⭐ 4,239 · Aug 25, 2025 · Updated 8 months ago
- Fast and memory-efficient exact attention ⭐ 23,628 · Updated this week
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ⭐ 435 · Aug 17, 2022 · Updated 3 years ago
- Foundation Architecture for (M)LLMs ⭐ 3,133 · Apr 11, 2024 · Updated 2 years ago
- Running large language models on a single GPU for throughput-oriented scenarios. ⭐ 9,366 · Oct 28, 2024 · Updated last year