Lightning-AI / lightning-thunder
Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It lets you use different hardware executors at once, across one or thousands of GPUs.
☆1,131
Related projects:
- A native PyTorch library for large model training · ☆1,544
- Puzzles for learning Triton · ☆966
- Schedule-Free Optimization in PyTorch · ☆1,800
- PyTorch-native quantization and sparsity for training and inference · ☆726
- Tile primitives for speedy kernels · ☆1,489
- Minimalistic large language model 3D-parallelism training · ☆1,111
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton · ☆452
- UNet diffusion model in pure CUDA · ☆562
- A JAX research toolkit for building, editing, and visualizing neural networks · ☆1,638
- Transform datasets at scale; optimize datasets for fast AI model training · ☆318
- A modern model graph visualizer and debugger · ☆976
- Official implementation of "Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling" · ☆774
- A simple, performant, and scalable JAX LLM · ☆1,450
- Training LLMs with QLoRA + FSDP · ☆1,382
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection · ☆1,354
- Deep learning for dummies: all the practical details and useful utilities that go into working with real models · ☆662
- Freeing data processing from scripting madness with a set of platform-agnostic, customizable pipeline processing blocks · ☆1,935
- A data streaming library for efficient neural network training · ☆1,076
- A PyTorch quantization backend for optimum · ☆758
- TensorDict, a dedicated tensor container for PyTorch · ☆807
- Official implementation of Half-Quadratic Quantization (HQQ) · ☆659
- nanoGPT-style version of Llama 3.1 · ☆1,162
- High-quality datasets, tools, and concepts for LLM fine-tuning · ☆1,664
- llama3.np, a pure NumPy implementation of the Llama 3 model · ☆955
- What would you do with 1000 H100s... · ☆816
- A native-PyTorch library for LLM fine-tuning · ☆3,942
- Accelerate your Hugging Face Transformers 7.6-9x; native to Hugging Face and PyTorch · ☆679
- Open-weights language model from Google DeepMind, based on Griffin · ☆592