tspeterkim / mixed-precision-from-scratch
Mixed precision training from scratch with Tensors and CUDA
☆24 · Updated last year
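The technique this repo implements, mixed precision training, follows a standard recipe: keep an FP32 master copy of the weights, run the forward and backward passes in FP16, scale the loss up so small gradients do not underflow in half precision, then unscale and apply the update to the FP32 master weights. A minimal NumPy sketch of that loop (NumPy stands in for the repo's CUDA kernels; the function and parameter names here are illustrative, not the repo's actual API):

```python
import numpy as np

def mixed_precision_step(master_w, x, y, lr=0.01, loss_scale=1024.0):
    """One training step for a linear model, loss = 0.5 * ||x @ w - y||^2.

    Illustrative sketch of the standard mixed-precision recipe;
    names and signature are assumptions, not the repo's API.
    """
    # 1. Cast the FP32 master weights (and inputs) to FP16 for compute.
    w16 = master_w.astype(np.float16)
    x16 = x.astype(np.float16)
    y16 = y.astype(np.float16)

    # 2. Forward and backward in FP16, with the loss gradient scaled up
    #    so small values survive half precision.
    pred = x16 @ w16
    grad_pred = (pred - y16) * np.float16(loss_scale)
    grad_w16 = x16.T @ grad_pred

    # 3. Unscale in FP32 and update the FP32 master copy.
    grad_w32 = grad_w16.astype(np.float32) / loss_scale
    master_w -= lr * grad_w32
    return master_w
```

The key design point is that the weight update happens in FP32: an FP16 update of `lr * grad` would often round to zero against the weight's magnitude, while the master copy accumulates it correctly.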
Alternatives and similar repositories for mixed-precision-from-scratch
Users interested in mixed-precision-from-scratch are comparing it to the repositories listed below.
- ring-attention experiments ☆143 · Updated 9 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆78 · Updated last month
- ☆161 · Updated last year
- Load compute kernels from the Hub ☆207 · Updated this week
- ☆119 · Updated last month
- Cataloging released Triton kernels. ☆245 · Updated 6 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆83 · Updated last month
- Learn CUDA with PyTorch ☆29 · Updated this week
- Collection of kernels written in the Triton language ☆136 · Updated 3 months ago
- ☆74 · Updated 3 weeks ago
- Fast low-bit matmul kernels in Triton ☆330 · Updated last week
- Flash-Muon: An Efficient Implementation of the Muon Optimizer ☆142 · Updated last month
- ☆225 · Updated last week
- Code for studying the super weight in LLMs ☆113 · Updated 7 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆138 · Updated 11 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆128 · Updated 7 months ago
- Applied AI experiments and examples for PyTorch ☆286 · Updated last month
- ☆225 · Updated last month
- ☆112 · Updated last year
- Boosting 4-bit inference kernels with 2:4 sparsity ☆80 · Updated 10 months ago
- 🚀 Efficiently (pre)training foundation models with native PyTorch features, including FSDP for training and the SDPA implementation of Flash… ☆256 · Updated last week
- [ICLR 2025] Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆121 · Updated 7 months ago
- Small-scale distributed training of sequential deep learning models, built on NumPy and MPI. ☆135 · Updated last year
- This repository contains the experimental PyTorch-native float8 training UX ☆224 · Updated 11 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆225 · Updated 7 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs ☆165 · Updated last year
- ☆51 · Updated last year
- A minimal cache manager for PagedAttention, on top of llama3. ☆93 · Updated 10 months ago
- A curated collection of resources, tutorials, and best practices for learning and mastering NVIDIA CUTLASS ☆196 · Updated 2 months ago
- A repository to unravel the language of GPUs, making their kernel conversations easy to understand ☆188 · Updated last month