ptillet / triton-llvm-releases
☆22 · Updated last year
Alternatives and similar repositories for triton-llvm-releases
Users interested in triton-llvm-releases are comparing it to the libraries listed below.
- Benchmark tests supporting the TiledCUDA library. ☆17 · Updated 10 months ago
- FlexAttention w/ FlashAttention3 support ☆27 · Updated last year
- ☆50 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆24 · Updated this week
- GPTQ inference TVM kernel ☆39 · Updated last year
- CUDA 12.2 HMM demos ☆20 · Updated last year
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆97 · Updated 3 months ago
- ☆15 · Updated last year
- ☆43 · Updated 5 months ago
- Inference framework for MoE layers based on TensorRT, with Python bindings ☆41 · Updated 4 years ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆84 · Updated 2 weeks ago
- PyTorch implementation of the Flash Spectral Transform Unit. ☆18 · Updated last year
- ☆22 · Updated last year
- FP64-equivalent GEMM via Int8 Tensor Cores using the Ozaki scheme ☆87 · Updated 6 months ago
- Ahead-of-Time (AOT) Triton Math Library ☆77 · Updated this week
- ☆90 · Updated 11 months ago
- No-GIL Python environment featuring NVIDIA Deep Learning libraries. ☆64 · Updated 5 months ago
- TORCH_LOGS parser for PT2 ☆61 · Updated 2 weeks ago
- Hacks for PyTorch ☆19 · Updated 2 years ago
- ☆32 · Updated last year
- Standalone Flash Attention v2 kernel without a libtorch dependency ☆111 · Updated last year
- A tracing JIT for PyTorch ☆17 · Updated 3 years ago
- Quantize transformers to any learned arbitrary 4-bit numeric format ☆48 · Updated 2 months ago
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 · Updated last week
- TensorRT LLM Benchmark Configuration ☆13 · Updated last year
- IntLLaMA: A fast and light quantization solution for LLaMA ☆18 · Updated 2 years ago
- High-speed GEMV kernels, up to a 2.7x speedup over the PyTorch baseline. ☆116 · Updated last year
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆65 · Updated 3 years ago
- Awesome Triton Resources ☆36 · Updated 5 months ago
- An easily extensible framework for understanding and optimizing CUDA operators, intended for learning use only ☆17 · Updated last year