hidet-org / hidet
An open-source, efficient deep learning framework/compiler, written in Python.
☆692 · Updated last month
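For context, hidet's typical entry point is as a `torch.compile` backend. The snippet below is a minimal sketch, assuming hidet is installed (e.g. via `pip install hidet`) and a CUDA GPU is available; the `backend='hidet'` usage follows the project's README.

```python
# Minimal sketch: compiling a PyTorch model with the hidet backend.
# Assumes `pip install hidet` and a CUDA-capable GPU.
import torch
import hidet  # makes the 'hidet' backend available to torch.compile

model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).cuda().eval()

x = torch.randn(32, 128, device='cuda')

# TorchDynamo captures the graph and hands it to hidet for compilation.
compiled = torch.compile(model, backend='hidet')

with torch.no_grad():
    y = compiled(x)
print(y.shape)  # torch.Size([32, 10])
```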
Alternatives and similar repositories for hidet:
Users who are interested in hidet are comparing it to the libraries listed below.
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,039 · Updated 11 months ago
- Pipeline Parallelism for PyTorch ☆762 · Updated 7 months ago
- A library to analyze PyTorch traces. ☆366 · Updated this week
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆529 · Updated last month
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆774 · Updated 3 months ago
- Backward compatible ML compute opset inspired by HLO/MHLO ☆465 · Updated this week
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆798 · Updated this week
- ☆295 · Updated this week
- Fast low-bit matmul kernels in Triton ☆285 · Updated this week
- ☆163 · Updated 9 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆579 · Updated 2 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 8 months ago
- Implementation of a Transformer, but completely in Triton ☆263 · Updated 3 years ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆314 · Updated this week
- ☆198 · Updated this week
- Shared Middle-Layer for Triton Compilation ☆241 · Updated this week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. ☆1,564 · Updated last year
- A CPU+GPU Profiling library that provides access to timeline traces and hardware performance counters. ☆794 · Updated this week
- The Torch-MLIR project aims to provide first class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,491 · Updated this week
- Applied AI experiments and examples for PyTorch ☆256 · Updated 3 weeks ago
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆840 · Updated this week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆788 · Updated 7 months ago
- Microsoft Automatic Mixed Precision Library ☆590 · Updated 6 months ago
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile. ☆628 · Updated 4 months ago
- Experimental projects related to TensorRT ☆95 · Updated this week
- Cataloging released Triton kernels. ☆216 · Updated 3 months ago
- The Tensor Algebra SuperOptimizer for Deep Learning ☆705 · Updated 2 years ago
- ☆197 · Updated 9 months ago
- Several optimization methods of half-precision general matrix multiplication (HGEMM) using Tensor Cores with the WMMA API and MMA PTX instructions. ☆383 · Updated 7 months ago
- ☆249 · Updated 8 months ago