hidet-org / hidet
An open-source, efficient deep learning framework/compiler, written in Python.
☆681 · Updated last week
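For context, hidet's documented entry point is PyTorch's `torch.compile` interface: importing `hidet` registers it as a dynamo backend, so an unmodified module can be compiled with it. Below is a minimal sketch, assuming hidet and a CUDA-enabled PyTorch build are installed.

```python
import torch
import hidet  # importing hidet registers the 'hidet' torch.compile backend

# A small example module; any unmodified PyTorch model works the same way.
model = torch.nn.Linear(128, 64).cuda().eval()
x = torch.randn(8, 128, device='cuda')

# Compile the module with hidet as the backend.
model_opt = torch.compile(model, backend='hidet')

with torch.no_grad():
    y = model_opt(x)  # first call triggers compilation; later calls reuse it
```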
Alternatives and similar repositories for hidet:
Users interested in hidet are comparing it to the libraries listed below.
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,029 · Updated 10 months ago
- A library to analyze PyTorch traces. ☆332 · Updated last week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆303 · Updated this week
- Applied AI experiments and examples for PyTorch ☆225 · Updated this week
- Pipeline Parallelism for PyTorch ☆749 · Updated 5 months ago
- Backward compatible ML compute opset inspired by HLO/MHLO ☆446 · Updated last week
- Implementation of a Transformer, but completely in Triton ☆257 · Updated 2 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆221 · Updated 6 months ago
- Fast low-bit matmul kernels in Triton ☆236 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆514 · Updated this week
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆746 · Updated this week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆699 · Updated last month
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,424 · Updated this week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆523 · Updated last week
- A library of GPU kernels for sparse matrix operations. ☆255 · Updated 4 years ago
- Shared Middle-Layer for Triton Compilation ☆226 · Updated this week
- Composable Kernel: Performance Portable Programming Model for Machine Learning Tensor Operators ☆349 · Updated this week
- The Tensor Algebra SuperOptimizer for Deep Learning ☆696 · Updated 2 years ago
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆812 · Updated this week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆729 · Updated 5 months ago
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ☆496 · Updated this week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. ☆1,556 · Updated last year
- Repository for the QUIK project, enabling the use of 4-bit kernels for generative inference (EMNLP 2024) ☆175 · Updated 10 months ago
- Microsoft Automatic Mixed Precision Library ☆567 · Updated 4 months ago