Lightning-AI / lightning-thunder
PyTorch compiler that accelerates training and inference. It provides built-in optimizations for performance, memory, and parallelism, and makes it easy to write your own.
☆1,416 · Updated this week
Alternatives and similar repositories for lightning-thunder
Users interested in lightning-thunder are comparing it to the libraries listed below.
- PyTorch native quantization and sparsity for training and inference ☆2,384 · Updated this week
- Transform datasets at scale. Optimize datasets for fast AI model training. ☆541 · Updated last week
- A modern model graph visualizer and debugger ☆1,318 · Updated this week
- A PyTorch native platform for training generative AI models ☆4,476 · Updated this week
- UNet diffusion model in pure CUDA ☆647 · Updated last year
- Schedule-Free Optimization in PyTorch ☆2,215 · Updated 4 months ago
- A PyTorch quantization backend for Optimum ☆987 · Updated last month
- Minimalistic large language model 3D-parallelism training ☆2,239 · Updated last month
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆576 · Updated last month
- ☆1,004 · Updated 7 months ago
- TensorDict is a PyTorch-dedicated tensor container. ☆967 · Updated last week
- Puzzles for learning Triton ☆2,008 · Updated 10 months ago
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆1,836 · Updated last month
- Tile primitives for speedy kernels ☆2,767 · Updated last week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆877 · Updated 3 weeks ago
- Best practices & guides on how to write distributed PyTorch training code ☆487 · Updated 7 months ago
- Training LLMs with QLoRA + FSDP ☆1,529 · Updated 10 months ago
- A simple, performant, and scalable JAX LLM! ☆1,917 · Updated this week
- For optimization algorithm research and development. ☆539 · Updated last week
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆813 · Updated 2 months ago
- A JAX research toolkit for building, editing, and visualizing neural networks. ☆1,821 · Updated 3 months ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆912 · Updated 5 months ago
- Scalable and performant data loading ☆304 · Updated last week
- NanoGPT (124M) in 3 minutes ☆3,145 · Updated 2 months ago
- Helpful tools and examples for working with flex-attention ☆997 · Updated 3 weeks ago
- Fault tolerance for PyTorch (HSDP, LocalSGD, DiLoCo, Streaming DiLoCo) ☆410 · Updated last week
- ☆537 · Updated last year
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆687 · Updated last year
- What would you do with 1000 H100s... ☆1,109 · Updated last year
- The Autograd Engine ☆637 · Updated last year