Tencent / PatrickStar
PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone.
☆754Updated last year
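For context on what that tagline means in practice, here is a minimal sketch of driving PatrickStar's training engine. It assumes the DeepSpeed-style `initialize_engine(model_func, local_rank, config)` entry point and config keys recalled from the project's README; the toy model, dummy loss, and config values are illustrative assumptions, not a verified recipe.

```python
# Minimal PatrickStar training sketch, assuming the DeepSpeed-style
# initialize_engine(model_func, local_rank, config) API from the README;
# exact names and required config keys may differ between releases.
import torch
from patrickstar.runtime import initialize_engine

def model_func():
    # Hypothetical toy model; in practice this would build a BERT/GPT-style model.
    return torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 1024),
    )

# Assumed config layout: an optimizer spec plus the chunk size (in elements)
# used by PatrickStar's chunk-based heterogeneous memory management.
config = {
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4, "weight_decay": 0.0}},
    "default_chunk_size": 32 * 1024 * 1024,
}

model, optimizer = initialize_engine(model_func=model_func, local_rank=0, config=config)

for _ in range(10):
    batch = torch.randn(8, 1024, device="cuda")
    loss = model(batch).float().pow(2).mean()  # dummy loss for illustration
    model.backward(loss)                       # engine-managed backward pass
    optimizer.step()
    optimizer.zero_grad()
```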
Alternatives and similar repositories for PatrickStar:
Users interested in PatrickStar are comparing it to the libraries listed below.
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training☆397Updated 2 months ago
- Bagua Speeds up PyTorch☆877Updated 5 months ago
- Large-scale model inference.☆628Updated last year
- Running BERT without Padding☆468Updated 2 years ago
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU.☆1,503Updated last year
- Tutel MoE: An Optimized Mixture-of-Experts Implementation☆746Updated this week
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads.☆834Updated 2 weeks ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster.☆1,016Updated 9 months ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.☆1,549Updated 11 months ago
- A primitive library for neural networks☆1,308Updated last month
- LightSeq: A High Performance Library for Sequence Processing and Generation☆3,237Updated last year
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052☆467Updated 10 months ago
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executable from a DNN model description.☆972Updated 3 months ago
- High performance distributed framework for training deep learning recommendation models based on PyTorch.☆398Updated this week
- Slicing a PyTorch Tensor Into Parallel Shards☆298Updated 3 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training.☆266Updated last year
- Fast implementation of BERT inference directly on NVIDIA (CUDA, CUBLAS) and Intel MKL☆542Updated 4 years ago
- Library for 8-bit optimizers and quantization routines.☆717Updated 2 years ago
- Microsoft Automatic Mixed Precision Library☆549Updated 3 months ago
- A fast MoE implementation for PyTorch☆1,596Updated 6 months ago
- MegCC is a deep learning model compiler with an ultra-lightweight runtime, high efficiency, and easy portability☆476Updated 2 months ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀☆1,671Updated 2 months ago
- FastFormers - highly efficient transformer models for NLU☆703Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2☆1,354Updated 9 months ago
- Efficient Inference for Big Models☆574Updated last year
- An open-source efficient deep learning framework/compiler, written in Python.☆668Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed.☆1,942Updated last month
- Fast Inference Solutions for BLOOM☆563Updated 3 months ago
- A library for high performance deep learning inference on NVIDIA GPUs.☆550Updated 2 years ago