fw-ai / llama-cuda-graph-example
Example of applying CUDA graphs to LLaMA-v2
☆12 · Updated last year
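The repo demonstrates CUDA graph capture for LLM inference: record the fixed sequence of kernels in a decode step once, then replay the whole sequence with a single launch to cut per-kernel launch overhead. Below is a minimal sketch of the same technique in PyTorch; the model, shapes, and variable names are illustrative stand-ins, not code from fw-ai/llama-cuda-graph-example, which applies the idea to LLaMA-v2.

```python
import torch

# Minimal sketch of CUDA graph capture/replay for a fixed-shape step.
# `model` is a stand-in for one transformer decode step (assumption:
# any static-shape forward pass follows the same pattern).
# Requires a CUDA-capable GPU.
device = torch.device("cuda")
model = torch.nn.Linear(4096, 4096).to(device)

# Static buffers: graphs replay against fixed memory addresses, so new
# inputs must be copied into this tensor rather than reallocated.
static_input = torch.randn(8, 4096, device=device)

# Warm up on a side stream so capture doesn't record lazy-init work.
side = torch.cuda.Stream()
side.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(side):
    for _ in range(3):
        model(static_input)
torch.cuda.current_stream().wait_stream(side)

# Capture one forward pass into a graph.
graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph):
    static_output = model(static_input)

# Replay: refill the captured input buffer in place, then relaunch the
# entire recorded kernel sequence with one call.
static_input.copy_(torch.randn(8, 4096, device=device))
graph.replay()
print(static_output.shape)  # torch.Size([8, 4096])
```

Launch overhead dominates small-batch autoregressive decoding, which is why graph replay tends to help token-by-token generation more than large-batch prefill.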
Alternatives and similar repositories for llama-cuda-graph-example:
Users interested in llama-cuda-graph-example are comparing it to the libraries listed below.
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆111 · Updated this week
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆72 · Updated 7 months ago
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding ☆115 · Updated 4 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆63 · Updated last week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆100 · Updated last week
- Extensible collectives library in Triton ☆85 · Updated 3 weeks ago
- Hydragen: High-Throughput LLM Inference with Shared Prefixes ☆36 · Updated 11 months ago
- A minimal implementation of vLLM. ☆39 · Updated 8 months ago
- Transformers components but in Triton ☆32 · Updated last month
- PyTorch bindings for CUTLASS grouped GEMM. ☆81 · Updated 5 months ago
- Make Triton easier ☆47 · Updated 10 months ago
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] ☆59 · Updated 6 months ago
- The simplest implementation of recent Sparse Attention patterns for efficient LLM inference. ☆59 · Updated 2 months ago
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆116 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆68 · Updated 10 months ago
- Elixir: Train a Large Language Model on a Small GPU Cluster ☆14 · Updated last year
- DeeperGEMM: crazy optimized version ☆67 · Updated 3 weeks ago
- FlexAttention w/ FlashAttention3 Support ☆26 · Updated 6 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆70 · Updated 2 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLMs ☆159 · Updated 9 months ago
- Estimate MFU for DeepSeekV3 ☆22 · Updated 3 months ago
- Load compute kernels from the Hub ☆115 · Updated this week
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters ☆126 · Updated 4 months ago