ptillet / triton-llvm-releases
☆20 · Updated last year
Alternatives and similar repositories for triton-llvm-releases:
Users interested in triton-llvm-releases are comparing it to the libraries listed below.
- Benchmark tests supporting the TiledCUDA library. ☆12 · Updated 2 months ago
- ☆48 · Updated 10 months ago
- ☆11 · Updated 3 years ago
- FlexAttention w/ FlashAttention3 support ☆27 · Updated 3 months ago
- No-GIL Python environment featuring NVIDIA deep learning libraries. ☆41 · Updated 2 months ago
- TileFusion is a highly efficient kernel template library designed to raise the level of abstraction in CUDA C for processing tiles. ☆43 · Updated this week
- CUDA 12.2 HMM demos ☆19 · Updated 6 months ago
- GPTQ inference TVM kernel ☆38 · Updated 9 months ago
- TensorRT LLM benchmark configuration ☆12 · Updated 6 months ago
- An external memory allocator example for PyTorch. ☆14 · Updated 3 years ago
- Memory optimizations for deep learning (ICML 2023) ☆62 · Updated 10 months ago
- Inference framework for MoE layers based on TensorRT with Python bindings ☆41 · Updated 3 years ago
- Code for Large Graph Convolutional Network Training with GPU-Oriented Data Communication Architecture (accepted by PVLDB). The outdated wr… ☆9 · Updated last year
- ☆22 · Updated last month
- APPy (Annotated Parallelism for Python) enables users to annotate loops and tensor expressions in Python with compiler directives akin to… ☆23 · Updated last month
- IntLLaMA: a fast and light quantization solution for LLaMA ☆18 · Updated last year
- Yet another polyhedral compiler for deep learning ☆19 · Updated last year
- Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large … ☆63 · Updated 2 years ago
- ☆22 · Updated last year
- Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators ☆18 · Updated 3 weeks ago
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆86 · Updated this week
- Framework to reduce autotune overhead to zero for well-known deployments. ☆59 · Updated this week
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆22 · Updated 7 months ago
- ☆15 · Updated 4 months ago
- Ahead-of-time (AOT) Triton math library ☆50 · Updated this week
- GEMM and Winograd-based convolutions using CUTLASS ☆26 · Updated 4 years ago
- ☆58 · Updated 8 months ago
- FP64-equivalent GEMM via Int8 Tensor Cores using the Ozaki scheme ☆55 · Updated 4 months ago