TiledTensor / TiledBench
Benchmark tests supporting the TiledCUDA library.
☆12 · Updated last month
Alternatives and similar repositories for TiledBench:
Users interested in TiledBench are comparing it to the libraries listed below.
- ☆36 · Updated this week
- TensorRT LLM Benchmark Configuration ☆12 · Updated 5 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆57 · Updated last month
- ☆19 · Updated 3 months ago
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated 3 months ago
- GPTQ inference TVM kernel ☆38 · Updated 8 months ago
- Transformers components but in Triton ☆29 · Updated 2 months ago
- An Attention Superoptimizer ☆20 · Updated 8 months ago
- Quantized Attention on GPU ☆34 · Updated last month
- Open deep learning compiler stack for CPU, GPU and specialized accelerators ☆17 · Updated 2 weeks ago
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆21 · Updated 2 weeks ago
- ☆22 · Updated 3 weeks ago
- An external memory allocator example for PyTorch. ☆14 · Updated 3 years ago
- Debug print operator for cudagraph debugging ☆10 · Updated 5 months ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆75 · Updated this week
- Odysseus: Playground of LLM Sequence Parallelism ☆64 · Updated 7 months ago
- ☆22 · Updated last month
- ☆11 · Updated 3 years ago
- ⚡️Write HGEMM from scratch using Tensor Cores with the WMMA, MMA and CuTe APIs, achieving peak ⚡️ performance ☆43 · Updated this week
- An auxiliary project analyzing the characteristics of KV in DiT Attention. ☆23 · Updated last month
- Implement Flash Attention using CuTe. ☆65 · Updated last month
- CUDA 12.2 HMM demos ☆19 · Updated 5 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆37 · Updated 5 months ago
- Tacker: Tensor-CUDA Core Kernel Fusion for Improving the GPU Utilization while Ensuring QoS ☆18 · Updated 3 years ago
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆22 · Updated 7 months ago
- Official repository for "QSync: Quantization-Minimized Synchronous Distributed Training Across Hybrid Devices" (IPDPS '24). ☆19 · Updated 10 months ago
- ThrillerFlow is a Dataflow Analysis and Codegen Framework written in Rust. ☆14 · Updated last month
- 📚 [WIP] FFPA: Yet another Faster Flash Prefill Attention with O(1) ⚡️GPU SRAM complexity for headdim > 256, 1.8x~3x↑🎉 faster vs SDPA EA. ☆49 · Updated this week
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆63 · Updated 2 years ago