tenstorrent / tt-torch
Frontend integration for PyTorch with tt-mlir
☆23 · Updated this week
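The headline above describes tt-torch as a PyTorch frontend for tt-mlir; in PyTorch 2 terms, frontends of this kind typically hook in through the `torch.compile` (Dynamo) backend mechanism. The sketch below shows only that generic hook, with a hypothetical `inspect_and_fallback` backend that prints the captured FX graph and runs it eagerly — it is not tt-torch's actual entry point, which would lower the graph through tt-mlir to Tenstorrent hardware instead.

```python
# Minimal sketch of the PyTorch 2 (Dynamo) backend hook that frontends such as
# tt-torch plug into. `inspect_and_fallback` is a hypothetical stand-in: it
# prints the captured FX graph and falls back to eager execution, whereas a
# real frontend would lower the graph (e.g. through tt-mlir) to device code.
from typing import List

import torch
import torch.nn as nn


def inspect_and_fallback(gm: torch.fx.GraphModule,
                         example_inputs: List[torch.Tensor]):
    # A Dynamo backend receives an FX GraphModule plus example inputs and
    # must return a callable with the same signature as gm.forward.
    gm.graph.print_tabular()
    return gm.forward  # eager fallback instead of compiling for an accelerator


model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 8))
compiled = torch.compile(model, backend=inspect_and_fallback)
out = compiled(torch.randn(4, 32))
print(out.shape)
```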
Alternatives and similar repositories for tt-torch
Users interested in tt-torch are comparing it to the libraries listed below.
- Tenstorrent MLIR compiler ☆169 · Updated this week
- Tenstorrent TT-BUDA Repository ☆315 · Updated 4 months ago
- ⭐️ TTNN Compiler for PyTorch 2 ⭐️ Enables running PyTorch models on Tenstorrent hardware using the eager or compile path ☆53 · Updated this week
- The TT-Forge FE is a graph compiler designed to optimize and transform computational graphs for deep learning models, enhancing their per… ☆48 · Updated this week
- Tenstorrent's MLIR Based Compiler. We aim to enable developers to run AI on all configurations of Tenstorrent hardware, through an open-s… ☆99 · Updated this week
- Tenstorrent Kernel Module ☆50 · Updated this week
- ☆53 · Updated this week
- TT-NN operator library, and TT-Metalium low level kernel programming model. ☆1,069 · Updated this week
- GPUOcelot: A dynamic compilation framework for PTX ☆207 · Updated 6 months ago
- IREE's PyTorch Frontend, based on Torch Dynamo. ☆94 · Updated this week
- ☆148 · Updated this week
- Python interface for MLIR - the Multi-Level Intermediate Representation ☆263 · Updated 8 months ago
- An experimental CPU backend for Triton ☆139 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆20 · Updated this week
- Backward compatible ML compute opset inspired by HLO/MHLO ☆517 · Updated this week
- A Fusion Code Generator for NVIDIA GPUs (commonly known as "nvFuser") ☆346 · Updated this week
- OpenAI Triton backend for Intel® GPUs ☆198 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆111 · Updated this week
- Attention in SRAM on Tenstorrent Grayskull ☆38 · Updated last year
- Intel® Extension for MLIR. A staging ground for MLIR dialects and tools for Intel devices using the MLIR toolchain. ☆139 · Updated last week
- TVM for Tenstorrent ASICs ☆25 · Updated this week
- ctypes wrappers for HIP, CUDA, and OpenCL ☆130 · Updated last year
- TritonParse: A Compiler Tracer, Visualizer, and mini-Reproducer (WIP) for Triton Kernels ☆139 · Updated this week
- Repo for the AI Compiler team, intended for the implementation of a PJRT device. ☆19 · Updated this week
- Nvidia Instruction Set Specification Generator ☆286 · Updated last year
- Shared Middle-Layer for Triton Compilation ☆264 · Updated this week
- An experimental CPU backend for Triton (https://github.com/openai/triton) ☆44 · Updated 4 months ago
- A lightweight, Pythonic frontend for MLIR ☆80 · Updated last year
- Evaluating Large Language Models for CUDA Code Generation. ComputeEval is a framework designed to generate and evaluate CUDA code from Lar… ☆58 · Updated last month
- Development repository for the Triton language and compiler ☆127 · Updated this week