PatrickStar enables Larger, Faster, Greener Pretrained Models for NLP and democratizes AI for everyone.
☆776 · updated Nov 18, 2025
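For context on what PatrickStar itself does (chunk-based heterogeneous CPU+GPU memory management for training), here is a minimal training-loop sketch in the style of its README. The `initialize_engine` entry point and the config keys shown are assumptions based on the project's documented usage pattern and may differ across versions:

```python
# Hedged sketch of PatrickStar usage; entry point and config keys are
# assumptions based on the README's pattern, not a verified API.
import torch
import torch.nn as nn
from patrickstar.runtime import initialize_engine  # assumed entry point

def model_func():
    # The model is built on CPU; PatrickStar then manages parameter
    # placement itself via fixed-size memory chunks.
    return nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

config = {
    # Assumed config keys following the README's example.
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
    "fp16": {"enabled": True, "loss_scale": 0},
    "default_chunk_size": 64 * 1024 * 1024,  # chunk-based memory management
}

model, optimizer = initialize_engine(model_func=model_func, local_rank=0, config=config)

x = torch.randn(8, 1024)
loss = model(x).float().sum()  # toy loss for illustration
model.backward(loss)           # backward goes through the engine, not loss.backward()
optimizer.step()
```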
Alternatives and similar repositories for PatrickStar
Users interested in PatrickStar are comparing it to the libraries listed below.
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ☆1,547 · updated Jul 18, 2025
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆221 · updated Aug 19, 2024
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,300 · updated May 16, 2023
- PyTorch extensions for high performance and large scale training. ☆3,408 · updated Apr 26, 2025
- Bagua speeds up PyTorch. ☆882 · updated Aug 1, 2024
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆924 · updated Dec 30, 2024
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆133 · updated Jul 6, 2023
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,324 · updated this week
- Training and serving large-scale neural networks with auto parallelization. ☆3,187 · updated Dec 9, 2023
- ☆28 · updated Jul 11, 2021
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · updated Mar 31, 2023
- PSTensor provides a way to hack the memory management of tensors in TensorFlow and PyTorch by defining your own C++ tensor class. ☆10 · updated Feb 10, 2022
- ☆220 · updated Aug 17, 2023
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GPT-OSS/DeepSeek/Kimi-K2/Qwen3 with FP8/NVFP4/MXFP4 ☆988 · updated Apr 11, 2026
- Transformer-related optimization, including BERT and GPT ☆6,415 · updated Mar 27, 2024
- Ongoing research training transformer models at scale ☆16,253 · updated this week
- Ring attention implementation with flash attention ☆1,015 · updated Sep 10, 2025
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,878 · updated this week
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆146 · updated Jun 25, 2022
- Making large AI models cheaper, faster, and more accessible ☆41,379 · updated Apr 27, 2026
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆479 · updated Mar 15, 2024
- Microsoft Automatic Mixed Precision Library ☆636 · updated Dec 1, 2025
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆80 · updated Nov 19, 2024
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (see the usage sketch after this list). ☆42,281 · updated this week
- Pipeline Parallelism for PyTorch ☆786 · updated Aug 21, 2024
- High performance NCCL plugin for Bagua. ☆15 · updated Sep 15, 2021
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,076 · updated Apr 17, 2024
- Zero Bubble Pipeline Parallelism ☆452 · updated May 7, 2025
- OneFlow is a deep learning framework designed to be user-friendly, scalable, and efficient. ☆9,391 · updated Dec 4, 2025
- Running BERT without Padding ☆479 · updated Mar 18, 2022
- AITemplate is a Python framework which renders neural networks into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,717 · updated Apr 9, 2026
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆666 · updated Jan 15, 2026
- ☆17 · updated Dec 9, 2022
- Ongoing research training transformer language models at scale, including BERT & GPT-2 ☆2,246 · updated Aug 14, 2025
- Slicing a PyTorch Tensor Into Parallel Shards ☆300 · updated Jun 7, 2025
- Optimized primitives for collective multi-GPU communication ☆4,680 · updated this week
- A fast MoE implementation for PyTorch ☆1,846 · updated Feb 10, 2025
- Development repository for the Triton language and compiler (kernel sketch after this list) ☆19,124 · updated this week
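Several entries above (DeepSpeed, Colossal-AI, FairScale) occupy the same ZeRO-style memory-partitioning and offloading niche as PatrickStar. As a point of comparison, here is a minimal DeepSpeed sketch using `deepspeed.initialize`, which is DeepSpeed's documented entry point; the toy model, batch, and the particular config keys shown are illustrative choices, not a complete configuration:

```python
import torch
import torch.nn as nn
import deepspeed

model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024))

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    # ZeRO stage 3 partitions parameters, gradients, and optimizer state
    # across ranks; offload_param moves parameters to CPU, similar in
    # spirit to PatrickStar's heterogeneous CPU+GPU training.
    "zero_optimization": {"stage": 3, "offload_param": {"device": "cpu"}},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-3}},
}

# Returns (engine, optimizer, dataloader, lr_scheduler); the engine wraps
# the model and owns backward/step.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

x = torch.randn(8, 1024).to(model_engine.device).half()  # fp16 inputs for the fp16 engine
loss = model_engine(x).float().sum()  # toy loss for illustration
model_engine.backward(loss)           # engine handles gradient partitioning
model_engine.step()
```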
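Triton (the last entry) sits at a lower level than the other libraries here: instead of configuring a training engine, you write GPU kernels directly in Python. The canonical vector-add kernel from Triton's own tutorials gives the flavor:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)       # one program per 1024-element block
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
```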