Tencent / TurboTransformers
A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU.
☆1,532 · Updated 3 months ago
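For orientation, here is a minimal usage sketch of accelerating a PyTorch BERT with TurboTransformers, modeled on the project's README; the `from_torch` conversion call and the shape of the outputs are assumptions to verify against your installed version:

```python
import torch
import transformers
import turbo_transformers

# Load a standard Hugging Face BERT in PyTorch and switch to inference mode.
model = transformers.BertModel.from_pretrained("bert-base-uncased")
model.eval()

# Convert the PyTorch weights into a TurboTransformers runtime model.
# from_torch() follows the project's README examples; verify the exact
# signature against the version you have installed.
tt_model = turbo_transformers.BertModel.from_torch(model)

# Run inference on a toy batch of token ids.
input_ids = torch.tensor([[12166, 10699, 16752, 4454]], dtype=torch.long)
with torch.no_grad():
    res = tt_model(input_ids)  # outputs mirror transformers.BertModel
```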
Alternatives and similar repositories for TurboTransformers
Users interested in TurboTransformers are comparing it to the libraries listed below.
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,296 · Updated 2 years ago
- Fast implementation of BERT inference directly on NVIDIA (CUDA, cuBLAS) and Intel MKL ☆546 · Updated 4 years ago
- Running BERT without Padding ☆475 · Updated 3 years ago
- Boosting web services for deep learning applications. ☆1,245 · Updated 4 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆952 · Updated 6 months ago
- FastFormers - highly efficient transformer models for NLU ☆707 · Updated 7 months ago
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone. ☆766 · Updated 2 years ago
- EasyTransfer is designed to make the development of transfer learning in NLP applications easier. ☆859 · Updated 3 years ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,689 · Updated last year
- A PyTorch-based knowledge distillation toolkit for natural language processing ☆1,680 · Updated 2 years ago
- A library for high performance deep learning inference on NVIDIA GPUs. ☆557 · Updated 3 years ago
- Pretrained language model and its related optimization techniques developed by Huawei Noah's Ark Lab. ☆3,140 · Updated last year
- Bagua speeds up PyTorch ☆883 · Updated last year
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆899 · Updated 9 months ago
- HugeCTR is a high-efficiency GPU framework designed for Click-Through-Rate (CTR) estimation training ☆1,035 · Updated last month
- A flexible and efficient deep neural network (DNN) compiler that generates a high-performance executable from a DNN model description. ☆994 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,422 · Updated last year
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,453 · Updated this week
- PaddlePaddle large model development suite, providing an end-to-end development toolchain for large language models, cross-modal large models, biocomputing large models, and other domains. ☆474 · Updated last year
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆407 · Updated 2 months ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆471 · Updated 3 years ago
- Dive into Deep Learning Compiler ☆646 · Updated 3 years ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆479 · Updated last year
- A primitive library for neural networks ☆1,363 · Updated 10 months ago
- A fast MoE implementation for PyTorch ☆1,806 · Updated 8 months ago
- PyTorch extensions for high performance and large scale training. ☆3,380 · Updated 5 months ago
- The source code of FastBERT (ACL 2020) ☆609 · Updated 3 years ago
- Easy and Efficient Transformer: Scalable Inference Solution for Large NLP Models ☆265 · Updated 10 months ago
- ☆413 · Updated last year
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable ☆1,586 · Updated last year