NVIDIA / LDDL
Distributed preprocessing and data loading for language datasets
☆39 · Updated last year
Alternatives and similar repositories for LDDL
Users that are interested in LDDL are comparing it to the libraries listed below
- A performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind… ☆158 · Updated 2 months ago
- Torch Distributed Experimental ☆117 · Updated last year
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆167 · Updated 2 weeks ago
- Research and development for optimizing transformers ☆129 · Updated 4 years ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆120 · Updated 8 months ago
- ☆74 · Updated 4 months ago
- ☆110 · Updated 11 months ago
- PyTorch RFCs (experimental) ☆134 · Updated 2 months ago
- oneCCL Bindings for Pytorch* ☆100 · Updated 2 weeks ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components. ☆208 · Updated last week
- Training material for IPU users: tutorials, feature examples, simple applications ☆86 · Updated 2 years ago
- ☆118 · Updated last year
- ☆120 · Updated last year
- ☆251 · Updated last year
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- This repository contains the results and code for the MLPerf™ Training v1.0 benchmark. ☆37 · Updated last year
- Extensible collectives library in Triton ☆88 · Updated 4 months ago
- Implementation of a Transformer, but completely in Triton ☆273 · Updated 3 years ago
- The Triton backend for PyTorch TorchScript models ☆158 · Updated 3 weeks ago
- ☆39 · Updated last year
- Home for the OctoML PyTorch Profiler ☆113 · Updated 2 years ago
- This repository contains the experimental PyTorch-native float8 training UX ☆224 · Updated last year
- Simple distributed deep learning on TensorFlow ☆133 · Updated 2 months ago
- Applied AI experiments and examples for PyTorch ☆291 · Updated this week
- A parallel framework for training deep neural networks ☆63 · Updated 5 months ago
- A tensor-aware point-to-point communication primitive for machine learning ☆262 · Updated last week
- Boosting 4-bit inference kernels with 2:4 sparsity ☆80 · Updated 11 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆214 · Updated last year
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆62 · Updated last month
- A bunch of kernels that might make stuff slower 😉 ☆58 · Updated this week