yqhu / profiler-workshop
Example code for profiler workshop
☆33 · Updated 3 years ago
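The workshop code itself is not reproduced on this page. Purely as an assumption based on the repo name and description, a minimal torch.profiler sketch of the kind of profiling such a workshop typically covers might look like this (the model and tensor shapes below are placeholders, not taken from the repo):

```python
import torch
from torch.profiler import profile, record_function, ProfilerActivity

# Placeholder model and input; not taken from the workshop repo.
model = torch.nn.Linear(512, 512)
x = torch.randn(32, 512)

# Profile CPU activity for a single labeled forward pass.
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    with record_function("forward_pass"):
        model(x)

# Summarize operator-level CPU time, most expensive ops first.
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```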
Alternatives and similar repositories for profiler-workshop
Users interested in profiler-workshop are comparing it to the libraries listed below.
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆210 · Updated 9 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆307 · Updated 11 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆93 · Updated last week
- This repository contains integer operators on GPUs for PyTorch. ☆205 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆163 · Updated 10 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆76 · Updated 9 months ago
- Applied AI experiments and examples for PyTorch ☆274 · Updated last week
- ☆105 · Updated 9 months ago
- High-speed GEMV kernels, up to 2.7x speedup over the PyTorch baseline. ☆109 · Updated 10 months ago
- ☆146 · Updated 10 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆357 · Updated 9 months ago
- Fast low-bit matmul kernels in Triton ☆311 · Updated this week
- PyTorch bindings for CUTLASS grouped GEMM. ☆125 · Updated 5 months ago
- ☆252 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆292 · Updated 3 months ago
- Cataloging released Triton kernels. ☆229 · Updated 4 months ago
- [ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference ☆292 · Updated 6 months ago
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆211 · Updated last year
- ☆157 · Updated last year
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆424 · Updated last month
- Triton-based implementation of Sparse Mixture of Experts. ☆217 · Updated 6 months ago
- [ICLR 2025] Palu: Compressing KV-Cache with Low-Rank Projection ☆116 · Updated 3 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated 10 months ago
- ☆208 · Updated 10 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- A collection of memory efficient attention operators implemented in the Triton language. ☆271 · Updated last year
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆303 · Updated 4 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆70 · Updated 11 months ago
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆92 · Updated last year
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆127 · Updated this week