hidet-org / hidet
An open-source, efficient deep learning framework/compiler, written in Python.
☆698 Updated this week
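A minimal usage sketch of hidet, based on its documented `torch.compile` integration (the `'hidet'` backend name and the toy model below are illustrative assumptions, not code from this listing):

```python
# Sketch: optimizing an unmodified PyTorch model with hidet's torch.compile backend.
import torch
import hidet  # importing hidet registers the 'hidet' dynamo backend

model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).cuda().eval()

x = torch.randn(8, 128, device="cuda")

# Compile the model with hidet as the backend; the first call triggers
# compilation, and subsequent calls reuse the generated kernels.
compiled = torch.compile(model, backend="hidet")

with torch.no_grad():
    y = compiled(x)
```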
Alternatives and similar repositories for hidet
Users interested in hidet are comparing it to the libraries listed below.
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,046 Updated last year
- A library to analyze PyTorch traces. ☆379 Updated this week
- Pipeline Parallelism for PyTorch ☆766 Updated 9 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆544 Updated this week
- ☆309 Updated last week
- Fast low-bit matmul kernels in Triton ☆303 Updated this week
- ☆167 Updated 11 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆222 Updated 10 months ago
- The Tensor Algebra SuperOptimizer for Deep Learning ☆714 Updated 2 years ago
- ☆210 Updated this week
- Applied AI experiments and examples for PyTorch ☆270 Updated this week
- Backward compatible ML compute opset inspired by HLO/MHLO ☆483 Updated this week
- Shared Middle-Layer for Triton Compilation ☆250 Updated last week
- Implementation of a Transformer, but completely in Triton ☆265 Updated 3 years ago
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆329 Updated this week
- ☆206 Updated 10 months ago
- A tensor-aware point-to-point communication primitive for machine learning ☆257 Updated 2 years ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆619 Updated 3 weeks ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆850 Updated this week
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆866 Updated last week
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆986 Updated 8 months ago
- An easy-to-understand TensorOp Matmul Tutorial ☆359 Updated 8 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆827 Updated 8 months ago
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆827 Updated 5 months ago
- TorchX is a universal job launcher for PyTorch applications. TorchX is designed to have fast iteration time for training/research and sup… ☆366 Updated last week
- Microsoft Automatic Mixed Precision Library ☆602 Updated 8 months ago
- PyTorch emulation library for Microscaling (MX)-compatible data formats ☆236 Updated last month
- Experimental projects related to TensorRT ☆105 Updated this week
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,545 Updated this week
- Collection of kernels written in the Triton language ☆123 Updated last month