Tencent / TurboTransformers
A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU.
☆1,526 · Updated 3 months ago
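A minimal sketch of how such a runtime is typically used, following the conversion pattern shown in the TurboTransformers README: load a stock Hugging Face BERT model, convert it, and run inference. The checkpoint name and toy input are illustrative assumptions, and the exact call signature may vary across versions.

```python
import torch
import transformers
import turbo_transformers

# Load a standard PyTorch BERT model from Hugging Face (illustrative checkpoint).
torch_model = transformers.BertModel.from_pretrained("bert-base-uncased")
torch_model.eval()

# Convert the PyTorch weights into a TurboTransformers model for faster inference.
tt_model = turbo_transformers.BertModel.from_torch(torch_model)

# Run inference on a toy batch of token ids (batch size 1, sequence length 32).
input_ids = torch.randint(0, torch_model.config.vocab_size, (1, 32), dtype=torch.long)
with torch.no_grad():
    # Mirrors the torch model's forward interface; the return structure
    # (sequence output, pooled output) is an assumption based on the README.
    outputs = tt_model(input_ids)
```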
Alternatives and similar repositories for TurboTransformers
Users interested in TurboTransformers are comparing it to the libraries listed below.
- Fast implementation of BERT inference directly on NVIDIA (CUDA, cuBLAS) and Intel MKL ☆544 · Updated 4 years ago
- Running BERT without Padding ☆472 · Updated 3 years ago
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,282 · Updated 2 years ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,687 · Updated 8 months ago
- Boosting web services for deep learning applications. ☆1,241 · Updated 4 years ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆950 · Updated 3 months ago
- FastFormers - highly efficient transformer models for NLU ☆705 · Updated 3 months ago
- Bagua speeds up PyTorch. ☆882 · Updated 11 months ago
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone. ☆761 · Updated 2 years ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆875 · Updated 6 months ago
- EasyTransfer is designed to make the development of transfer learning in NLP applications easier. ☆861 · Updated 2 years ago
- A library for high performance deep learning inference on NVIDIA GPUs. ☆553 · Updated 3 years ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,399 · Updated this week
- HugeCTR is a high-efficiency GPU framework designed for Click-Through-Rate (CTR) estimation training ☆1,014 · Updated 3 months ago
- Easy and Efficient Transformer: scalable inference solution for large NLP models ☆261 · Updated 7 months ago
- Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab. ☆3,116 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,401 · Updated last year
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆989 · Updated 9 months ago
- A PyTorch-based knowledge distillation toolkit for natural language processing ☆1,662 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆475 · Updated last year
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆473 · Updated 3 years ago
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆408 · Updated 2 weeks ago
- Dive into Deep Learning Compiler ☆646 · Updated 3 years ago
- Mesh TensorFlow: Model Parallelism Made Easier ☆1,611 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,099 · Updated 3 months ago
- PyTorch extensions for high performance and large scale training. ☆3,337 · Updated 2 months ago
- Adlik: Toolkit for Accelerating Deep Learning Inference ☆801 · Updated last year
- The source code of FastBERT (ACL 2020) ☆606 · Updated 3 years ago