HazyResearch / ThunderKittens
Tile primitives for speedy kernels
☆ 1,489 · Updated this week
Related projects:
- Puzzles for learning Triton — ☆ 966 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only) — ☆ 558 · Updated 5 months ago
- CUDA-related news and material links — ☆ 1,079 · Updated 2 weeks ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton — ☆ 452 · Updated last week
- PyTorch-native quantization and sparsity for training and inference — ☆ 726 · Updated this week
- FlashInfer: Kernel Library for LLM Serving — ☆ 1,138 · Updated this week
- A native PyTorch library for large model training — ☆ 1,544 · Updated this week
- Efficient implementations of state-of-the-art linear attention models in PyTorch and Triton — ☆ 1,190 · Updated this week
- Make PyTorch models up to 40% faster! Thunder is a source-to-source compiler for PyTorch. It enables using different hardware executors a… — ☆ 1,131 · Updated this week
- Open-weights language model from Google DeepMind, based on Griffin — ☆ 592 · Updated 2 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating-point (FP8) precision on Hopper and Ada GPUs… — ☆ 1,811 · Updated this week
- An open-source efficient deep learning framework/compiler, written in Python — ☆ 646 · Updated 3 weeks ago
- UNet diffusion model in pure CUDA — ☆ 562 · Updated 2 months ago
- What would you do with 1000 H100s... — ☆ 816 · Updated 8 months ago
- A multi-level tensor algebra superoptimizer — ☆ 314 · Updated this week
- Schedule-Free Optimization in PyTorch — ☆ 1,800 · Updated last month
- ☆ 1,164 · Updated last week
- Minimalistic large language model 3D-parallelism training — ☆ 1,111 · Updated this week
- A JAX research toolkit for building, editing, and visualizing neural networks — ☆ 1,638 · Updated last week
- Pipeline Parallelism for PyTorch — ☆ 708 · Updated 3 weeks ago
- Serving multiple LoRA-finetuned LLMs as one — ☆ 946 · Updated 4 months ago
- nanoGPT-style version of Llama 3.1 — ☆ 1,162 · Updated last month
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding — ☆ 1,099 · Updated 7 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration — ☆ 2,333 · Updated 2 months ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster — ☆ 994 · Updated 5 months ago
- Material for cuda-mode lectures — ☆ 2,401 · Updated 2 weeks ago
- A simple, performant, and scalable JAX LLM! — ☆ 1,450 · Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models — ☆ 1,180 · Updated 2 months ago
- A machine learning compiler for GPUs, CPUs, and ML accelerators — ☆ 2,577 · Updated this week
- A PyTorch quantization backend for Optimum — ☆ 758 · Updated this week
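Several entries above (AWQ, SmoothQuant, the FP8 Transformer Engine, torchao, and the Optimum backend) revolve around low-precision quantization. As a point of reference for what these libraries build on, here is a minimal sketch of plain symmetric per-tensor int8 weight quantization in NumPy — not the AWQ or SmoothQuant algorithm, both of which add activation-aware scaling on top; all names below are illustrative.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ≈ scale * q."""
    scale = np.abs(w).max() / 127.0  # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    """Recover an approximation of the original weights."""
    return q.astype(np.float32) * scale

# Round-trip a random weight matrix; the worst-case error of
# round-to-nearest is half a quantization step (scale / 2).
w = np.random.randn(64, 64).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize_int8(q, scale)
max_err = np.abs(w - w_hat).max()
```

Activation-aware schemes like AWQ differ in that they rescale weight channels based on activation statistics before quantizing, so that the channels that matter most for the output lose the least precision.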