Lightning-AI / lightning-thunder
Thunder gives your PyTorch models superpowers for training and inference. Unlock out-of-the-box optimizations for performance, memory and parallelism, or roll your own.
☆1,314 · Updated this week
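As a rough sketch of what that looks like in practice, the snippet below compiles a small module with `thunder.jit`, the entry point documented in the project's README. Treat it as an illustrative sketch under that assumption rather than a definitive usage guide; the toy model and shapes are made up for the example.

```python
import torch
import thunder  # pip install lightning-thunder

# A toy PyTorch module standing in for a real model (hypothetical example).
model = torch.nn.Sequential(
    torch.nn.Linear(64, 128),
    torch.nn.GELU(),
    torch.nn.Linear(128, 64),
)

# thunder.jit traces the module and returns a drop-in replacement that runs
# through Thunder's generated execution trace.
thunder_model = thunder.jit(model)

x = torch.randn(8, 64)
y = thunder_model(x)

# The compiled module should match eager PyTorch numerically.
assert torch.allclose(y, model(x), atol=1e-5)
```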
Alternatives and similar repositories for lightning-thunder:
Users interested in lightning-thunder are comparing it to the libraries listed below.
- PyTorch native quantization and sparsity for training and inference ☆1,927 · Updated this week
- A PyTorch native library for large model training ☆3,506 · Updated this week
- Transform datasets at scale. Optimize datasets for fast AI model training. ☆436 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆1,737 · Updated this week
- A PyTorch quantization backend for Optimum ☆907 · Updated 3 weeks ago
- Puzzles for learning Triton ☆1,540 · Updated 4 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆524 · Updated last month
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆970 · Updated 3 weeks ago
- Tile primitives for speedy kernels ☆2,208 · Updated this week
- TensorDict is a PyTorch-dedicated tensor container. ☆901 · Updated this week
- Schedule-Free Optimization in PyTorch ☆2,125 · Updated last week
- GPU programming related news and material links ☆1,436 · Updated 2 months ago
- Pipeline Parallelism for PyTorch ☆760 · Updated 7 months ago
- NanoGPT (124M) in 3 minutes ☆2,427 · Updated 2 weeks ago
- A modern model graph visualizer and debugger ☆1,155 · Updated this week
- Helpful tools and examples for working with flex-attention ☆701 · Updated 2 weeks ago
- UNet diffusion model in pure CUDA ☆600 · Updated 9 months ago
- Official implementation of Half-Quadratic Quantization (HQQ) ☆772 · Updated this week
- A simple, performant and scalable JAX LLM! ☆1,669 · Updated this week
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆857 · Updated last month
- Official PyTorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,228 · Updated 3 weeks ago
- What would you do with 1000 H100s... ☆1,024 · Updated last year
- For optimization algorithm research and development. ☆503 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆2,311 · Updated this week
- Implementation of 💍 Ring Attention, from Liu et al. at Berkeley AI, in PyTorch ☆509 · Updated 5 months ago
- PyTorch video decoding ☆465 · Updated this week
- Deep learning for dummies. All the practical details and useful utilities that go into working with real models. ☆782 · Updated 3 weeks ago
- An open-source efficient deep learning framework/compiler, written in Python. ☆692 · Updated last month
- PyTorch native post-training library ☆5,041 · Updated this week