Automated Parallelization System and Infrastructure for Multiple Ecosystems
☆81 · Updated Nov 19, 2024
Alternatives and similar repositories for easydist
Users interested in easydist are comparing it to the libraries listed below.
- TePDist (TEnsor Program DISTributed) is an HLO-level automatic distributed system for DL models. ☆98 · Updated Apr 22, 2023
- Accelerate Video Diffusion Inference via Sketching-Rendering Cooperation ☆19 · Updated Jun 11, 2025
- The official implementation of "Helen: Optimizing CTR Prediction Models with Frequency-wise Hessian Eigenvalue Regularization" ☆16 · Updated Mar 14, 2024
- A curated list of awesome projects and papers for distributed training or inference ☆267 · Updated Oct 8, 2024
- Chimera: bidirectional pipeline parallelism for efficiently training large-scale models ☆70 · Updated Mar 20, 2025
- Democratizing AlphaFold3: a PyTorch reimplementation to accelerate protein structure prediction ☆55 · Updated Dec 16, 2024
- A baseline repository of Auto-Parallelism in Training Neural Networks ☆147 · Updated Jun 25, 2022
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Updated Sep 10, 2024
- A deep learning intermediate representation for multi-platform compilation optimization ☆10 · Updated Oct 28, 2024
- Zero Bubble Pipeline Parallelism ☆452 · Updated May 7, 2025
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads ☆921 · Updated Dec 30, 2024
- Pipeline Parallelism for PyTorch ☆785 · Updated Aug 21, 2024
- This repository organizes the ImageNet-1k dataset into 10 coarse classes, where each class consists of semantically similar image categories ☆22 · Updated Dec 11, 2023
- A library to analyze PyTorch traces ☆474 · Updated Mar 17, 2026
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆125 · Updated Nov 27, 2024
- Optimizing AlphaFold Training and Inference on GPU Clusters ☆613 · Updated Jul 16, 2024
- [MLSys 2023] Pre-train and Search: Efficient Embedding Table Sharding with Pre-trained Neural Cost Models ☆16 · Updated May 5, 2023
- Scalable PaLM implementation in PyTorch ☆190 · Updated Dec 19, 2022
- A fast communication-overlapping library for tensor/expert parallelism on GPUs ☆1,273 · Updated Aug 28, 2025
- Study of CUTLASS ☆22 · Updated Nov 10, 2024
- [ACL 2023] The official implementation of "CAME: Confidence-guided Adaptive Memory Optimization" ☆97 · Updated Mar 22, 2025
- Distributed compiler based on Triton for parallel systems ☆1,394 · Updated Mar 11, 2026
- MSCCL++: A GPU-driven communication stack for scalable AI applications ☆490 · Updated this week
- Performance benchmarking with ColossalAI ☆39 · Updated Jul 6, 2022
- Large-scale model inference ☆627 · Updated Sep 12, 2023
- ☆24 · Updated Feb 20, 2024
- TACOS: [T]opology-[A]ware [Co]llective Algorithm [S]ynthesizer for Distributed Machine Learning ☆32 · Updated Jun 13, 2025
- MAGIS: Memory Optimization via Coordinated Graph Transformation and Scheduling for DNN (ASPLOS'24) ☆57 · Updated May 29, 2024
- Minimal Decision Transformer implementation written in JAX (Flax) ☆17 · Updated Aug 8, 2022
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for long-context Transformer model training and inference ☆653 · Updated Jan 15, 2026
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆13 · Updated Feb 11, 2026
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing ☆106 · Updated Jun 28, 2025
- Synthesizer for optimal collective communication algorithms ☆123 · Updated Apr 8, 2024
- Distributed SDDMM kernel ☆12 · Updated Jul 8, 2022
- ☆23 · Updated Aug 21, 2025
- ☆78 · Updated May 4, 2021
- Examples of CUDA implementations using CUTLASS CuTe ☆270 · Updated Jul 1, 2025
- ☆65 · Updated Apr 26, 2025
- Keyformer proposes KV cache reduction through key-token identification, without the need for fine-tuning ☆57 · Updated Mar 26, 2024