tspeterkim / mixed-precision-from-scratch
Mixed precision training from scratch with Tensors and CUDA
☆21 · Updated 8 months ago
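For context, the repository's topic is the standard mixed-precision recipe: keep fp32 "master" weights, run the forward/backward pass in fp16, scale the loss so small gradients survive fp16's limited range, then unscale and update in fp32. A minimal NumPy sketch of that idea (illustrative only, not code from this repository; all variable names are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
w_master = rng.standard_normal(4).astype(np.float32)   # fp32 master weights
x = rng.standard_normal(4).astype(np.float16)          # fp16 activations
loss_scale = np.float16(1024.0)                        # fixed loss scale

w16 = w_master.astype(np.float16)   # fp16 copy used for the forward pass
loss = w16 @ x                      # toy "loss" computed in fp16
grad16 = x * loss_scale             # fp16 gradient of (w @ x), pre-scaled
grad32 = grad16.astype(np.float32) / np.float32(loss_scale)  # unscale in fp32
w_master -= 0.01 * grad32           # weight update stays in fp32
```

Real implementations (like the CUDA one this repo demonstrates, or `torch.cuda.amp`) add dynamic loss scaling and per-op precision policies on top of this skeleton.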
Alternatives and similar repositories for mixed-precision-from-scratch:
Users interested in mixed-precision-from-scratch are comparing it to the repositories listed below.
- ring-attention experiments ☆116 · Updated 3 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆55 · Updated 2 months ago
- Cataloging released Triton kernels. ☆156 · Updated last week
- ☆75 · Updated 6 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆64 · Updated 4 months ago
- ☆138 · Updated 11 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆111 · Updated last month
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆219 · Updated this week
- Experiment of using Tangent to autodiff Triton ☆74 · Updated 11 months ago
- Fast low-bit matmul kernels in Triton ☆187 · Updated last week
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆67 · Updated 7 months ago
- ☆83 · Updated 7 months ago
- Repository for CPU Kernel Generation for LLM Inference ☆25 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆75 · Updated this week
- ☆45 · Updated last year
- Collection of kernels written in the Triton language ☆90 · Updated 2 months ago
- KernelBench: Can LLMs Write GPU Kernels? A benchmark of Torch -> CUDA problems ☆91 · Updated this week
- Repo hosting code and materials related to speeding up LLM inference using token merging. ☆34 · Updated 8 months ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆106 · Updated 5 months ago
- Layer-Condensed KV cache with 10x larger batch size, fewer parameters, and less computation. Dramatic speed-up with better task performance… ☆146 · Updated last month
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- A minimal implementation of vllm. ☆32 · Updated 5 months ago
- Make Triton easier ☆42 · Updated 7 months ago
- Breaking the Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆107 · Updated last month
- Extensible collectives library in Triton ☆76 · Updated 3 months ago
- ☆170 · Updated last week
- KV cache compression for high-throughput LLM inference ☆103 · Updated last month
- ☆107 · Updated 3 months ago
- This repository contains the experimental PyTorch-native float8 training UX ☆219 · Updated 5 months ago
- Code for Palu: Compressing KV-Cache with Low-Rank Projection ☆63 · Updated 2 months ago