tenstorrent / tt-inference-server
☆24 · Updated this week
Alternatives and similar repositories for tt-inference-server
Users interested in tt-inference-server are comparing it to the libraries listed below.
- Tenstorrent's MLIR Based Compiler. We aim to enable developers to run AI on all configurations of Tenstorrent hardware, through an open-s… ☆117 · Updated this week
- The TT-Forge FE is a graph compiler designed to optimize and transform computational graphs for deep learning models, enhancing their per… ☆51 · Updated this week
- TT-Studio: An all-in-one platform to deploy and manage AI models optimized for Tenstorrent hardware with dedicated front-end demo applic… ☆37 · Updated this week
- Tenstorrent TT-BUDA Repository ☆316 · Updated 5 months ago
- Tenstorrent MLIR compiler ☆185 · Updated this week
- Attention in SRAM on Tenstorrent Grayskull ☆38 · Updated last year
- ☆64 · Updated last week
- Tenstorrent console based hardware information program ☆54 · Updated this week
- Tenstorrent Firmware repository ☆21 · Updated this week
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆33 · Updated 3 weeks ago
- TT-NN operator library, and TT-Metalium low level kernel programming model. ☆1,210 · Updated this week
- Tenstorrent Kernel Module ☆54 · Updated this week
- ⭐️ TTNN Compiler for PyTorch 2 ⭐️ Enables running PyTorch models on Tenstorrent hardware using eager or compile path ☆57 · Updated this week
- ☆28 · Updated 6 months ago
- Efficient implementation of DeepSeek Ops (Blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆69 · Updated last month
- A comprehensive tool for visualizing and analyzing model execution, offering interactive graphs, memory plots, tensor details, buffer ove… ☆39 · Updated this week
- Buda Compiler Backend for Tenstorrent devices ☆30 · Updated 5 months ago
- TVM for Tenstorrent ASICs ☆26 · Updated 2 weeks ago
- An experimental CPU backend for Triton ☆153 · Updated 3 months ago
- IREE plugin repository for the AMD AIE accelerator ☆103 · Updated last week
- Nvidia Instruction Set Specification Generator ☆292 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆22 · Updated this week
- Repository of model demos using TT-Buda ☆62 · Updated 5 months ago
- An MLIR-based toolchain for AMD AI Engine-enabled devices. ☆481 · Updated last week
- GPUOcelot: A dynamic compilation framework for PTX ☆208 · Updated 7 months ago
- AI Tensor Engine for ROCm ☆279 · Updated this week
- KernelBench: Can LLMs Write GPU Kernels? - Benchmark with Torch -> CUDA problems ☆573 · Updated last week
- Unofficial description of the CUDA assembly (SASS) instruction sets. ☆144 · Updated 2 months ago
- Repo for AI Compiler team. The intended purpose of this repo is for implementation of a PJRT device. ☆27 · Updated this week
- ☆70 · Updated 7 months ago