Tencent / TurboTransformers
A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU.
☆1,530 · Updated last month
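For orientation, a minimal sketch of how this kind of runtime is typically used, based on the conversion-style workflow in TurboTransformers' documented Python API; the exact method names (e.g. `turbo_transformers.BertModel.from_torch`) and return values may vary by version, so treat this as an illustrative assumption rather than the definitive interface.

```python
# Minimal sketch: convert a Hugging Face BERT model into the
# TurboTransformers runtime and run inference with it.
# Assumes turbo_transformers.BertModel.from_torch as shown in the
# project's examples; names may differ across versions.
import torch
import transformers
import turbo_transformers

# Load a standard Hugging Face BERT model in eval mode.
torch_model = transformers.BertModel.from_pretrained("bert-base-uncased")
torch_model.eval()

# Convert the PyTorch weights into TurboTransformers' optimized runtime.
tt_model = turbo_transformers.BertModel.from_torch(torch_model)

# Run inference as with the original model.
# Token ids for "[CLS] hello world [SEP]" in the bert-base-uncased vocab.
input_ids = torch.tensor([[101, 7592, 2088, 102]], dtype=torch.long)
with torch.no_grad():
    outputs = tt_model(input_ids)
```

The design point shared by most libraries below is the same: keep the familiar model definition for loading weights, then hand execution to an optimized inference engine.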
Alternatives and similar repositories for TurboTransformers
Users interested in TurboTransformers are comparing it to the libraries listed below.
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,287 · Updated 2 years ago
- Fast implementation of BERT inference directly on NVIDIA (CUDA, cuBLAS) and Intel MKL ☆546 · Updated 4 years ago
- Running BERT without Padding ☆474 · Updated 3 years ago
- Boosting web services for deep learning applications. ☆1,242 · Updated 4 years ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,687 · Updated 9 months ago
- Bolt is a deep learning library with high performance and heterogeneous flexibility. ☆954 · Updated 4 months ago
- Pretrained language models and related optimization techniques developed by Huawei Noah's Ark Lab. ☆3,127 · Updated last year
- Bagua Speeds up PyTorch ☆884 · Updated last year
- FastFormers - highly efficient transformer models for NLU ☆707 · Updated 4 months ago
- FB (Facebook) + GEMM (General Matrix-Matrix Multiplication) - https://code.fb.com/ml-applications/fbgemm/ ☆1,426 · Updated this week
- A PyTorch-based knowledge distillation toolkit for natural language processing ☆1,670 · Updated 2 years ago
- PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone. ☆764 · Updated 2 years ago
- A library for high performance deep learning inference on NVIDIA GPUs. ☆558 · Updated 3 years ago
- ☆412 · Updated last year
- EasyTransfer is designed to make the development of transfer learning in NLP applications easier. ☆861 · Updated 2 years ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆992 · Updated 11 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,409 · Updated last year
- HugeCTR is a high-efficiency GPU framework designed for click-through-rate (CTR) estimation training ☆1,023 · Updated 4 months ago
- ☆220 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆475 · Updated last year
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆886 · Updated 7 months ago
- PyTorch extensions for high performance and large scale training. ☆3,361 · Updated 3 months ago
- The source code of FastBERT (ACL 2020) ☆606 · Updated 3 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,142 · Updated this week
- Repository for the paper "Optimal Subarchitecture Extraction for BERT" ☆472 · Updated 3 years ago
- Easy and Efficient Transformer: Scalable Inference Solution for Large NLP Models ☆262 · Updated 8 months ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆408 · Updated 2 weeks ago
- A primitive library for neural networks ☆1,348 · Updated 8 months ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable ☆1,581 · Updated last year
- Transformer related optimization, including BERT, GPT ☆6,274 · Updated last year