fw-ai / llama-cuda-graph-example
Example of applying CUDA graphs to LLaMA-v2
☆12 · Updated 2 years ago
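For context: CUDA graphs record a fixed sequence of kernel launches once and then replay the whole sequence with a single `cudaGraphLaunch`, removing per-kernel CPU launch overhead, which can dominate decode-step latency in small-batch LLM inference. Below is a minimal, generic stream-capture sketch, not code from this repository; the kernel and sizes are illustrative, and `cudaGraphInstantiateWithFlags` assumes CUDA 11.4 or newer.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

// Placeholder for a decode-step kernel (illustrative only).
__global__ void scaleKernel(float* x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main() {
    const int n = 1 << 20;
    float* d;
    cudaMalloc(&d, n * sizeof(float));

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Capture: record the launch sequence into a graph instead of executing it.
    cudaGraph_t graph;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    scaleKernel<<<(n + 255) / 256, 256, 0, stream>>>(d, 0.5f, n);
    scaleKernel<<<(n + 255) / 256, 256, 0, stream>>>(d, 2.0f, n);
    cudaStreamEndCapture(stream, &graph);

    // Instantiate once, then replay with one launch per decode step.
    cudaGraphExec_t exec;
    cudaGraphInstantiateWithFlags(&exec, graph, 0);
    for (int step = 0; step < 100; ++step) {
        cudaGraphLaunch(exec, stream);
    }
    cudaStreamSynchronize(stream);

    cudaGraphExecDestroy(exec);
    cudaGraphDestroy(graph);
    cudaStreamDestroy(stream);
    cudaFree(d);
    printf("done\n");
    return 0;
}
```

The catch, and the reason dedicated examples like this repository exist, is that a replayed graph reuses the captured launch parameters, so dynamic shapes (e.g., a growing KV cache) must be handled with padded buffers or graph updates.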
Alternatives and similar repositories for llama-cuda-graph-example
Users interested in llama-cuda-graph-example are comparing it to the repositories listed below.
- Boosting 4-bit inference kernels with 2:4 Sparsity (the 2:4 pattern is sketched after this list) · ☆90 · Updated last year
- Ship correct and fast LLM kernels to PyTorch · ☆127 · Updated 3 weeks ago
- Triton-based Symmetric Memory operators and examples · ☆72 · Updated 2 months ago
- Extensible collectives library in Triton · ☆91 · Updated 9 months ago
- Hydragen: High-Throughput LLM Inference with Shared Prefixes · ☆45 · Updated last year
- Framework to reduce autotune overhead to zero for well-known deployments · ☆92 · Updated 3 months ago
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆135 · Updated last year
- Ring-attention experiments · ☆161 · Updated last year
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters · ☆53 · Updated last year
- Triton-based implementation of Sparse Mixture of Experts · ☆259 · Updated 3 months ago
- Tree Attention: Topology-aware Decoding for Long-Context Attention on GPU clusters · ☆131 · Updated last year
- Repository for CPU Kernel Generation for LLM Inference · ☆27 · Updated 2 years ago
- Efficient Long-context Language Model Training by Core Attention Disaggregation · ☆73 · Updated last week
- GPTQ inference TVM kernel · ☆41 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] · ☆60 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry · ☆42 · Updated last year
- Autonomous GPU Kernel Generation via Deep Agents · ☆202 · Updated this week
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM · ☆175 · Updated last year
- A bunch of kernels that might make stuff slower 😉 · ☆73 · Updated this week
- The evaluation framework for training-free sparse attention in LLMs · ☆108 · Updated 2 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components · ☆218 · Updated 3 weeks ago
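On the 2:4 sparsity entry above: NVIDIA's sparse tensor cores accept weight matrices in which every contiguous group of four values keeps at most two nonzeros. The sketch below is illustrative, not code from the listed repository; the kernel name, test data, and launch sizes are assumptions. It prunes a weight array to the 2:4 pattern by zeroing the two smallest-magnitude entries in each group of four.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cmath>

// Zero the two smallest-magnitude entries in every group of 4 weights,
// leaving the 2:4 pattern that sparse tensor cores require.
__global__ void pruneTwoFour(float* w, int n) {
    int g = (blockIdx.x * blockDim.x + threadIdx.x) * 4;  // start of this thread's group
    if (g + 3 >= n) return;
    int lo = 0, lo2 = 1;  // in-group indices of the two smallest magnitudes so far
    if (fabsf(w[g + 1]) < fabsf(w[g + 0])) { lo = 1; lo2 = 0; }
    for (int k = 2; k < 4; ++k) {
        float a = fabsf(w[g + k]);
        if (a < fabsf(w[g + lo]))       { lo2 = lo; lo = k; }
        else if (a < fabsf(w[g + lo2])) { lo2 = k; }
    }
    w[g + lo]  = 0.0f;
    w[g + lo2] = 0.0f;
}

int main() {
    const int n = 16;
    float h[n];
    for (int i = 0; i < n; ++i) h[i] = (float)((i * 7) % 9) - 4.0f;  // arbitrary test weights

    float* d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    pruneTwoFour<<<1, (n + 3) / 4>>>(d, n);  // one thread per group of 4
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);

    // Each printed row should contain exactly two zeros.
    for (int i = 0; i < n; i += 4)
        printf("%5.1f %5.1f %5.1f %5.1f\n", h[i], h[i + 1], h[i + 2], h[i + 3]);
    return 0;
}
```

In practice, production kernels pair this pruning step with the compressed storage and metadata layout consumed by cuSPARSELt or hand-written sparse GEMMs; the kernel above only shows the magnitude-based selection rule.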