Tencent / PatrickStar
PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone.
☆779 · Updated 2 months ago
Alternatives and similar repositories for PatrickStar
Users interested in PatrickStar are comparing it to the libraries listed below.
- Large-scale model inference. ☆627 · Updated 2 years ago
- Bagua Speeds up PyTorch ☆884 · Updated last year
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆405 · Updated 6 months ago
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ☆1,541 · Updated 6 months ago
- Running BERT without Padding ☆476 · Updated 3 years ago
- Examples of training models with hybrid parallelism using ColossalAI ☆339 · Updated 2 years ago
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,304 · Updated 2 years ago
- Deep Learning Framework Performance Profiling Toolkit ☆294 · Updated 3 years ago
- Tutel MoE: Optimized Mixture-of-Experts Library; supports GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆960 · Updated last month
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆916 · Updated last year
- Efficient Inference for Big Models ☆585 · Updated 3 years ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- High performance distributed framework for training deep learning recommendation models based on PyTorch. ☆411 · Updated 7 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Updated 2 years ago
- Easy and Efficient Transformer: Scalable Inference Solution For Large NLP models ☆265 · Updated last year
- Microsoft Automatic Mixed Precision Library ☆635 · Updated 2 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,071 · Updated last year
- Models and examples built with OneFlow ☆101 · Updated last year
- OneFlow models for benchmarking. ☆104 · Updated last year
- A primitive library for neural networks ☆1,368 · Updated last year
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. ☆1,587 · Updated this week
- The road to hack SysML and become a systems expert ☆510 · Updated last year
- MLModelCI is a complete MLOps platform for managing, converting, profiling, and deploying MLaaS (Machine Learning-as-a-Service), bridging… ☆197 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,431 · Updated last year
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆618 · Updated 3 months ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,008 · Updated last year
- Adlik: Toolkit for Accelerating Deep Learning Inference ☆810 · Updated 2 years ago