Lightning-AI / litdata
Transform datasets at scale. Optimize datasets for fast AI model training.
☆406 · Updated this week
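litdata's headline workflow is a two-step pattern: optimize a raw dataset into streamable chunks, then stream those chunks during training. The sketch below illustrates that pattern under stated assumptions: it relies on litdata's documented `optimize`, `StreamingDataset`, and `StreamingDataLoader` entry points, while the `tokenize` transform, the input list, and the `fast_data` output directory are placeholders invented for illustration, not taken from this listing.

```python
# Minimal sketch of the litdata optimize -> stream workflow.
# Assumes litdata's documented optimize/StreamingDataset/StreamingDataLoader API;
# the transform, inputs, and output path below are placeholders.
from litdata import optimize, StreamingDataset, StreamingDataLoader


def tokenize(index: int):
    # Placeholder transform: in practice this would load and preprocess a real sample.
    return {"index": index, "tokens": list(range(index, index + 8))}


if __name__ == "__main__":
    # Step 1: transform raw inputs into an optimized, chunked dataset on disk.
    optimize(
        fn=tokenize,                # applied to each input item
        inputs=list(range(1000)),   # any list of items (file paths, indices, ...)
        output_dir="fast_data",     # where the optimized chunks are written
        chunk_bytes="64MB",         # target chunk size for efficient streaming
    )

    # Step 2: stream the optimized dataset during training.
    dataset = StreamingDataset("fast_data", shuffle=True)
    loader = StreamingDataLoader(dataset, batch_size=32, num_workers=4)
    for batch in loader:
        pass  # feed the batch to the model
```

The same optimized output can typically be streamed from local disk or object storage; see the litdata documentation for the exact storage options.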
Alternatives and similar repositories for litdata:
Users interested in litdata are comparing it to the libraries listed below.
- Scalable and Performant Data Loading ☆210 · Updated this week
- Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors a… ☆1,268 · Updated this week
- TensorDict is a PyTorch-dedicated tensor container. ☆868 · Updated this week
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… ☆225 · Updated last week
- For optimization algorithm research and development. ☆486 · Updated last week
- Helpful tools and examples for working with flex-attention ☆603 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆511 · Updated this week
- PyTorch per-step fault tolerance (actively under development) ☆226 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆670 · Updated this week
- PyTorch-native quantization and sparsity for training and inference ☆1,783 · Updated this week
- Universal Tensor Operations in Einstein-Inspired Notation for Python. ☆344 · Updated 2 months ago
- Annotated version of the Mamba paper ☆470 · Updated 11 months ago
- Pipeline Parallelism for PyTorch ☆739 · Updated 5 months ago
- Minimalistic large language model 3D-parallelism training ☆1,400 · Updated this week
- PyTorch video decoding ☆227 · Updated this week
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆341 · Updated last week
- A PyTorch quantization backend for optimum ☆870 · Updated 2 weeks ago
- Common Python utilities and GitHub Actions in the Lightning Ecosystem ☆51 · Updated this week
- Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory ☆431 · Updated 5 months ago
- Best practices & guides on how to write distributed PyTorch training code ☆342 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆498 · Updated 3 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,019 · Updated 9 months ago
- TorchFix - a linter for PyTorch-using code with autofix support ☆122 · Updated 3 weeks ago
- Minimal sharded dataset loaders, decoders, and utils for multi-modal document, image, and text datasets. ☆155 · Updated 9 months ago
- Implementation of Diffusion Transformer (DiT) in JAX ☆261 · Updated 7 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and SDPA implementation of Flash… ☆216 · Updated this week
- Website for hosting the Open Foundation Models Cheat Sheet ☆263 · Updated 7 months ago
- Legible, Scalable, Reproducible Foundation Models with Named Tensors and Jax ☆536 · Updated this week