ptillet / triton-llvm-releases
☆22 · Updated 2 years ago
Alternatives and similar repositories for triton-llvm-releases
Users interested in triton-llvm-releases are comparing it to the libraries listed below.
- ☆50 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆25 · Updated last week
- Benchmark tests supporting the TiledCUDA library. ☆17 · Updated 11 months ago
- GPTQ inference TVM kernel ☆39 · Updated last year
- CUDA 12.2 HMM demos ☆20 · Updated last year
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 · Updated 4 years ago
- TORCH_LOGS parser for PT2 ☆62 · Updated last month
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing. ☆100 · Updated 4 months ago
- Framework to reduce autotune overhead to zero for well-known deployments. ☆84 · Updated last month
- TensorRT LLM Benchmark Configuration ☆13 · Updated last year
- ☆22 · Updated last year
- A tracing JIT for PyTorch ☆17 · Updated 3 years ago
- ☆53 · Updated last week
- PyTorch implementation of the Flash Spectral Transform Unit. ☆18 · Updated last year
- GEMM and Winograd based convolutions using CUTLASS ☆28 · Updated 5 years ago
- TiledKernel is a code generation library based on macro kernels and memory hierarchy graph data structure. ☆19 · Updated last year
- Awesome Triton Resources ☆36 · Updated 6 months ago
- FractalTensor is a programming framework that introduces a novel approach to organizing data in deep neural networks (DNNs) as a list of… ☆29 · Updated 10 months ago
- ☆71 · Updated 7 months ago
- ☆16 · Updated last year
- ☆93 · Updated 11 months ago
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large… ☆65 · Updated 3 years ago
- ☆102 · Updated 5 months ago
- High-speed GEMV kernels, up to 2.7x speedup compared to the PyTorch baseline. ☆120 · Updated last year
- Prototype routines for GPU quantization written using PyTorch. ☆21 · Updated 2 months ago
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆47 · Updated 2 months ago
- Quantize transformers to any learned arbitrary 4-bit numeric format ☆48 · Updated 3 months ago
- An easily extensible framework for understanding and optimizing CUDA operators, intended for learning use only ☆18 · Updated last year
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators ☆19 · Updated this week