Tencent / TurboTransformers
A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU.
☆1,541 · Updated 6 months ago
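As a quick illustration of the typical workflow, here is a minimal sketch: load a pretrained Hugging Face BERT in PyTorch, convert it for accelerated inference, and run a forward pass. The `turbo_transformers.BertModel.from_torch` conversion call and the output structure follow the pattern documented in the project's README; treat the exact names as assumptions and check the repository for the current API.

```python
# Minimal sketch (assumed API, modeled on the project's README):
# convert a pretrained PyTorch BERT into a TurboTransformers model
# and run a single forward pass.
import torch
import transformers
import turbo_transformers  # assumed package name from the repo

# Load a standard Hugging Face BERT checkpoint.
tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-uncased")
torch_model = transformers.BertModel.from_pretrained("bert-base-uncased")
torch_model.eval()

# Convert the PyTorch weights into the TurboTransformers runtime
# (from_torch is an assumption based on the documented conversion flow).
tt_model = turbo_transformers.BertModel.from_torch(torch_model)

inputs = tokenizer("TurboTransformers speeds up BERT inference.",
                   return_tensors="pt")
with torch.no_grad():
    # Assumed to mirror transformers' BertModel outputs
    # (sequence output plus pooled output).
    outputs = tt_model(inputs["input_ids"])
```

The appeal of this conversion-style API is that serving code keeps the familiar `transformers` interface while the heavy attention and GEMM kernels run through an optimized C++/CUDA backend.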
Alternatives and similar repositories for TurboTransformers
Users interested in TurboTransformers are comparing it to the libraries listed below.
- Fast implementation of BERT inference directly on NVIDIA (CUDA, cuBLAS) and Intel MKL ☆547 · Updated 5 years ago
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,304 · Updated 2 years ago
- Running BERT without Padding ☆476 · Updated 3 years ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,689 · Updated last year
- Boosting the web services of deep learning applications. ☆1,244 · Updated 4 years ago
- FastFormers - highly efficient transformer models for NLU ☆709 · Updated 10 months ago
- Bagua speeds up PyTorch ☆884 · Updated last year
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆955 · Updated 9 months ago
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone. ☆779 · Updated 2 months ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,520 · Updated this week
- EasyTransfer is designed to make the development of transfer learning in NLP applications easier. ☆862 · Updated 3 years ago
- A library for high performance deep learning inference on NVIDIA GPUs. ☆557 · Updated 4 years ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆916 · Updated last year
- Easy and Efficient Transformer: scalable inference solution for large NLP models ☆265 · Updated last year
- HugeCTR is a high-efficiency GPU framework designed for click-through-rate (CTR) estimation training ☆1,041 · Updated 4 months ago
- PyTorch extensions for high performance and large scale training. ☆3,397 · Updated 9 months ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆1,006 · Updated last year
- Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab. ☆3,156 · Updated 2 years ago
- ☆413 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,431 · Updated last year
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- Mesh TensorFlow: Model Parallelism Made Easier ☆1,624 · Updated 2 years ago
- A PyTorch-based knowledge distillation toolkit for natural language processing ☆1,695 · Updated 2 years ago
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆470 · Updated 3 years ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆405 · Updated 6 months ago
- ☆219 · Updated 2 years ago
- ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators ☆2,369 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,220 · Updated 5 months ago
- A primitive library for neural networks ☆1,368 · Updated last year
- The source code of FastBERT (ACL 2020) ☆609 · Updated 4 years ago