HFAiLab / ffrecord
FireFlyer Record file format, writer and reader for DL training samples.
☆116 · Updated last year
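ffrecord packs many training samples into one file and supports random access by index. A minimal sketch of writing and reading an `.ffr` file is below; the `FileWriter`/`FileReader` names and arguments follow my reading of the project's README and should be treated as assumptions to verify against the repository.

```python
# Minimal sketch: pack pickled samples into an ffrecord file, then read a few back.
# FileWriter / FileReader signatures are assumed from the HFAiLab/ffrecord README.
import pickle
from ffrecord import FileWriter, FileReader

samples = [{"id": i, "text": f"sample {i}"} for i in range(100)]

# Write: the number of records must be declared up front.
writer = FileWriter("train.ffr", len(samples))
for sample in samples:
    writer.write_one(pickle.dumps(sample))   # each record is a bytes-like blob
writer.close()

# Read: random access by a list of indices, with optional checksum validation.
reader = FileReader("train.ffr", check_data=True)
blobs = reader.read([0, 7, 42])              # returns a list of bytes-like records
restored = [pickle.loads(b) for b in blobs]
reader.close()
```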
Related projects
Alternatives and complementary repositories for ffrecord
- HFAI deep learning models ☆87 · Updated last year
- Tests of different distributed-training methods on the High-Flyer AIHPC ☆21 · Updated 2 years ago
- ☆208 · Updated last year
- Zero Bubble Pipeline Parallelism ☆279 · Updated last week
- Dynamic Tensor Rematerialization prototype (modified PyTorch) and simulator. Paper: https://arxiv.org/abs/2006.09616 ☆129 · Updated last year
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. If you have any interests, … ☆105 · Updated 10 months ago
- A collection of memory efficient attention operators implemented in the Triton language. ☆217 · Updated 5 months ago
- Demystify RAM Usage in Multi-Process Data Loaders ☆179 · Updated last year
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆264 · Updated last year
- Tutorials for writing high-performance GPU operators in AI frameworks. ☆122 · Updated last year
- Automated Parallelization System and Infrastructure for Multiple Ecosystems ☆75 · Updated last week
- OneFlow models for benchmarking. ☆104 · Updated 3 months ago
- ☆74 · Updated 10 months ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆296 · Updated 3 years ago
- A high-performance deep learning training platform with task-level, time-shared scheduling of GPU compute ☆308 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆67 · Updated 3 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆96 · Updated this week
- ☆33 · Updated 2 months ago
- Megvii FILE Library - working with files in Python the same way as with the standard library ☆127 · Updated this week
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆390 · Updated this week
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformers Model Training and Inference ☆353 · Updated last week
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆63 · Updated 2 years ago
- ☆39 · Updated 3 years ago
- ☆30 · Updated last year
- A fast communication-overlapping library for tensor parallelism on GPUs. ☆219 · Updated 2 weeks ago
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆196 · Updated 2 months ago
- The pure and clear PyTorch Distributed Training Framework. ☆276 · Updated 9 months ago
- nnScaler: Compiling DNN models for Parallel Training ☆64 · Updated 2 weeks ago
- Tutel MoE: An Optimized Mixture-of-Experts Implementation ☆730 · Updated 2 weeks ago
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆265 · Updated last week