NVIDIA / LDDL
Distributed preprocessing and data loading for language datasets
☆39 · Updated 7 months ago
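As a rough illustration of what "distributed data loading" means here, below is a minimal per-rank sketch built from plain PyTorch primitives (`DistributedSampler` + `DataLoader`); LDDL provides a scalable, preprocessed version of this pattern for language datasets. The `TextDataset` is a hypothetical toy stand-in, not LDDL's own API.

```python
# Minimal sketch of per-rank data loading with plain PyTorch primitives.
# LDDL scales up this pattern for language datasets; the toy TextDataset
# below is illustrative only, not part of LDDL.
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, Dataset, DistributedSampler


class TextDataset(Dataset):
    """Hypothetical stand-in for a tokenized language dataset."""

    def __init__(self, num_samples=1024, seq_len=128, vocab_size=30522):
        self.data = torch.randint(0, vocab_size, (num_samples, seq_len))

    def __len__(self):
        return self.data.size(0)

    def __getitem__(self, idx):
        return self.data[idx]


def build_loader(batch_size=32):
    dataset = TextDataset()
    # With a process group initialized, each rank sees a disjoint shard.
    sampler = DistributedSampler(dataset) if dist.is_initialized() else None
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler,
                      shuffle=(sampler is None), num_workers=2)


if __name__ == "__main__":
    loader = build_loader()
    batch = next(iter(loader))
    print(batch.shape)  # (batch_size, seq_len)
```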
Related projects
Alternatives and complementary repositories for LDDL
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆146 · Updated 2 weeks ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆98 · Updated last week
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆63 · Updated 2 years ago
- Applied AI experiments and examples for PyTorch ☆166 · Updated 3 weeks ago
- Extensible collectives library in Triton ☆71 · Updated last month
- oneCCL Bindings for PyTorch* (see the sketch after this list) ☆86 · Updated 3 weeks ago
- Reference models for the Intel® Gaudi® AI Accelerator ☆155 · Updated 2 weeks ago
- FTPipe and related pipeline model parallelism research. ☆41 · Updated last year
- Torch Distributed Experimental ☆116 · Updated 3 months ago
- Research and development for optimizing transformers ☆125 · Updated 3 years ago
- MLPerf™ logging library ☆30 · Updated last week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆57 · Updated 2 months ago
- Benchmarks to capture important workloads. ☆28 · Updated 5 months ago
- Memory Optimizations for Deep Learning (ICML 2023) ☆60 · Updated 8 months ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆270 · Updated this week
- A schedule language for large model training ☆141 · Updated 5 months ago
- This repository contains the results and code for the MLPerf™ Training v1.0 benchmark. ☆37 · Updated 8 months ago
- PyTorch RFCs (experimental) ☆130 · Updated 2 months ago
- NVIDIA Resiliency Extension is a Python package for framework developers and users to implement fault-tolerant features. It improves the … ☆38 · Updated last month
- High-speed GEMV kernels with up to a 2.7x speedup over the PyTorch baseline. ☆90 · Updated 4 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆211 · Updated 3 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆165 · Updated this week
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆195 · Updated 3 months ago
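For the oneCCL bindings entry above, here is a minimal sketch of how the `ccl` backend is typically initialized and exercised, assuming the `oneccl_bindings_for_pytorch` package is installed; the address/port defaults and single-rank fallbacks are illustrative, not part of the library.

```python
# Hedged sketch: initializing a PyTorch process group with the oneCCL
# backend from oneccl_bindings_for_pytorch. Assumes the package is
# installed; env-var defaults below are illustrative for a local run.
import os

import torch
import torch.distributed as dist
import oneccl_bindings_for_pytorch  # noqa: F401  (registers the "ccl" backend)

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

dist.init_process_group(
    backend="ccl",
    rank=int(os.environ.get("RANK", 0)),
    world_size=int(os.environ.get("WORLD_SIZE", 1)),
)

t = torch.ones(4)
dist.all_reduce(t)  # sums the tensor across all ranks
print(t)

dist.destroy_process_group()
```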