Lightning-AI / lightning-thunder
Thunder gives your PyTorch models superpowers for training and inference. Unlock out-of-the-box optimizations for performance, memory, and parallelism, or roll your own.
☆1,342 · Updated this week
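For context, a minimal sketch of how Thunder is typically applied to a PyTorch module, assuming the `thunder.jit` entry point shown in the project's README (the toy model and shapes here are illustrative, not from the project docs):

```python
import torch
import torch.nn as nn
import thunder  # provided by the lightning-thunder package

# Toy model for illustration; any nn.Module (or plain function) works the same way.
model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 8))

# thunder.jit traces the model and returns an optimized callable.
jit_model = thunder.jit(model)

x = torch.randn(16, 64)
out = jit_model(x)   # runs the compiled/optimized version
print(out.shape)     # torch.Size([16, 8])
```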
Alternatives and similar repositories for lightning-thunder
Users interested in lightning-thunder are comparing it to the libraries listed below.
- PyTorch native quantization and sparsity for training and inference ☆2,041 · Updated this week
- A PyTorch native platform for training generative AI models ☆3,808 · Updated this week
- Transform datasets at scale. Optimize datasets for fast AI model training. ☆472 · Updated this week
- Puzzles for learning Triton ☆1,623 · Updated 5 months ago
- Schedule-Free Optimization in PyTorch ☆2,161 · Updated last month
- TensorDict is a PyTorch-dedicated tensor container. ☆925 · Updated this week
- A PyTorch quantization backend for Optimum ☆935 · Updated 3 weeks ago
- Minimalistic large language model 3D-parallelism training ☆1,870 · Updated this week
- Minimalistic 4D-parallelism distributed training framework for education purposes ☆1,464 · Updated 2 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆536 · Updated this week
- Tile primitives for speedy kernels ☆2,339 · Updated this week
- GPU programming related news and material links ☆1,501 · Updated 4 months ago
- UNet diffusion model in pure CUDA ☆602 · Updated 10 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,412 · Updated this week
- For optimization algorithm research and development. ☆513 · Updated this week
- NanoGPT (124M) in 3 minutes ☆2,546 · Updated 3 weeks ago
- Helpful tools and examples for working with flex-attention ☆766 · Updated last week
- What would you do with 1000 H100s... ☆1,045 · Updated last year
- ☆961 · Updated 3 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,044 · Updated last year
- A modern model graph visualizer and debugger ☆1,179 · Updated this week
- Official implementation of Half-Quadratic Quantization (HQQ) ☆810 · Updated this week
- Reaching LLaMA2 Performance with 0.1M Dollars ☆980 · Updated 9 months ago
- A simple, performant and scalable JAX LLM! ☆1,717 · Updated this week
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆871 · Updated 2 weeks ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,348 · Updated this week
- nvidia-modelopt is a unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculat… ☆909 · Updated last week
- Best practices & guides on how to write distributed PyTorch training code ☆418 · Updated 2 months ago
- PyTorch native post-training library ☆5,171 · Updated this week
- ☆455 · Updated 10 months ago