Tencent / PatrickStar
PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone.
☆767 · Updated 2 years ago
Alternatives and similar repositories for PatrickStar
Users interested in PatrickStar are comparing it to the libraries listed below.
- Bagua: Speeds up PyTorch ☆883 · Updated last year
- Large-scale model inference. ☆632 · Updated 2 years ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆408 · Updated last month
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ☆1,530 · Updated 2 months ago
- ☆220 · Updated 2 years ago
- Running BERT without Padding ☆475 · Updated 3 years ago
- Examples of training models with hybrid parallelism using ColossalAI ☆340 · Updated 2 years ago
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,290 · Updated 2 years ago
- High performance distributed framework for training deep learning recommendation models based on PyTorch. ☆409 · Updated 3 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆478 · Updated last year
- ☆412 · Updated last year
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆268 · Updated 2 years ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆895 · Updated 8 months ago
- Efficient Inference for Big Models ☆589 · Updated 2 years ago
- Tutel MoE: an optimized Mixture-of-Experts library; supports GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆924 · Updated last week
- Deep Learning Framework Performance Profiling Toolkit ☆290 · Updated 3 years ago
- Microsoft Automatic Mixed Precision Library ☆620 · Updated 11 months ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. ☆1,582 · Updated last year
- Easy and Efficient Transformer: a scalable inference solution for large NLP models ☆265 · Updated 9 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,418 · Updated last year
- ☆616 · Updated last year
- OneFlow models for benchmarking. ☆104 · Updated last year
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,062 · Updated last year
- A fast MoE implementation for PyTorch ☆1,790 · Updated 7 months ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,006 · Updated last year
- A primitive library for neural networks ☆1,359 · Updated 10 months ago
- The road to hacking SysML and becoming a systems expert ☆499 · Updated last year
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆609 · Updated last month
- MLModelCI is a complete MLOps platform for managing, converting, profiling, and deploying MLaaS (Machine Learning-as-a-Service), bridging… ☆194 · Updated 2 years ago
- A library for high performance deep learning inference on NVIDIA GPUs. ☆558 · Updated 3 years ago