hidet-org / hidet
An open-source, efficient deep learning framework and compiler, written in Python.
☆703 · Updated this week
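For context, here is a minimal usage sketch (not taken from this page) of how hidet typically plugs into PyTorch through `torch.compile`. The `'hidet'` backend name, the `pip install hidet` step, and the assumption that importing `hidet` registers the backend are based on the project's documentation, not on this listing; a CUDA GPU is also assumed.

```python
# Minimal sketch: compiling a PyTorch model with hidet via torch.compile.
# Assumes `pip install hidet`, a CUDA-capable GPU, and that importing hidet
# registers a 'hidet' dynamo backend (per the project's documentation).
import torch
import hidet  # noqa: F401  - the import registers the 'hidet' backend

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 10),
).cuda().eval()

x = torch.rand(8, 1024, device='cuda')

# Compile the model with the hidet backend and run inference.
model_opt = torch.compile(model, backend='hidet')
with torch.no_grad():
    y = model_opt(x)
print(y.shape)  # torch.Size([8, 10])
```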
Alternatives and similar repositories for hidet
Users interested in hidet are comparing it to the libraries listed below.
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,051 · Updated last year
- A library to analyze PyTorch traces. ☆387 · Updated last week
- Pipeline Parallelism for PyTorch ☆768 · Updated 10 months ago
- A subset of PyTorch's neural network modules, written in Python using OpenAI's Triton. ☆554 · Updated this week
- Fast low-bit matmul kernels in Triton ☆322 · Updated this week
- Backward compatible ML compute opset inspired by HLO/MHLO ☆490 · Updated this week
- Implementation of a Transformer, but completely in Triton ☆268 · Updated 3 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated 10 months ago
- Representation and Reference Lowering of ONNX Models in MLIR Compiler Infrastructure ☆873 · Updated last week
- The Tensor Algebra SuperOptimizer for Deep Learning ☆715 · Updated 2 years ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. ☆1,571 · Updated last year
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆628 · Updated last month
- Shared Middle-Layer for Triton Compilation ☆255 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆609 · Updated 8 months ago
- depyf is a tool to help you understand and adapt to the PyTorch compiler, torch.compile. ☆689 · Updated 2 months ago
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆868 · Updated this week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆337 · Updated this week
- A flexible and efficient deep neural network (DNN) compiler that generates high-performance executables from a DNN model description. ☆989 · Updated 9 months ago
- The Torch-MLIR project aims to provide first-class support from the PyTorch ecosystem to the MLIR ecosystem. ☆1,562 · Updated last week
- Flash Attention in ~100 lines of CUDA (forward pass only) ☆845 · Updated 5 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. ☆358 · Updated this week
- Fastest kernels written from scratch ☆281 · Updated 2 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆845 · Updated 9 months ago
- A CPU+GPU Profiling library that provides access to timeline traces and hardware performance counters. ☆821 · Updated this week
- An easy-to-understand TensorOp Matmul Tutorial ☆363 · Updated 9 months ago