tile-ai / tvm
Open deep learning compiler stack for CPU, GPU, and specialized accelerators
☆19 · Updated this week
Alternatives and similar repositories for tvm
Users who are interested in tvm are comparing it to the libraries listed below.
- GPTQ inference TVM kernel ☆39 · Updated last year
- TensorRT LLM Benchmark Configuration ☆13 · Updated last year
- ☆57 · Updated last week
- Quantized Attention on GPU ☆44 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference. ☆45 · Updated 5 months ago
- ☆50 · Updated 6 months ago
- Persistent dense GEMM for Hopper in `CuTeDSL` ☆15 · Updated 3 months ago
- ☆109 · Updated 6 months ago
- Benchmark tests supporting the TiledCUDA library. ☆17 · Updated last year
- ☆65 · Updated 6 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆85 · Updated 2 months ago
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆100 · Updated 4 months ago
- ☆19 · Updated last year
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of … ☆29 · Updated 11 months ago
- Accelerate LLM preference tuning via prefix sharing with a single line of code ☆51 · Updated 4 months ago
- ☆12 · Updated 10 months ago
- A Triton JIT runtime and FFI provider in C++ ☆29 · Updated 2 weeks ago
- An easily extensible framework for understanding and optimizing CUDA operators, intended for learning use only ☆18 · Updated last year
- ☆121 · Updated 3 months ago
- Standalone Flash Attention v2 kernel without libtorch dependency ☆112 · Updated last year
- ☆39 · Updated 3 months ago
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- Transformers components but in Triton ☆34 · Updated 6 months ago
- ☆33 · Updated 9 months ago
- ☆83 · Updated 9 months ago
- Multiple GEMM operators are constructed with cutlass to support LLM inference. ☆20 · Updated 3 months ago
- DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling ☆21 · Updated last month
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 4 months ago
- A Suite for Parallel Inference of Diffusion Transformers (DiTs) on multi-GPU Clusters ☆52 · Updated last year
- ☆21 · Updated this week