tenstorrent / tt-inference-server
☆ 34 · Updated this week
Alternatives and similar repositories for tt-inference-server
Users interested in tt-inference-server are comparing it to the libraries listed below.
- The TT-Forge FE is a graph compiler designed to optimize and transform computational graphs for deep learning models, enhancing their per… ☆51 · Updated this week
- Tenstorrent TT-BUDA repository ☆313 · Updated 7 months ago
- Tenstorrent MLIR compiler ☆213 · Updated last week
- Tenstorrent's MLIR-based compiler. We aim to enable developers to run AI on all configurations of Tenstorrent hardware, through an open-s… ☆141 · Updated this week
- TT-Studio: an all-in-one platform to deploy and manage AI models optimized for Tenstorrent hardware, with dedicated front-end demo applic… ☆37 · Updated last week
- Attention in SRAM on Tenstorrent Grayskull ☆39 · Updated last year
- An experimental CPU backend for Triton ☆164 · Updated 2 weeks ago
- AI Tensor Engine for ROCm ☆306 · Updated this week
- TVM for Tenstorrent ASICs ☆27 · Updated 2 months ago
- Tenstorrent kernel module ☆57 · Updated this week
- Efficient implementation of DeepSeek ops (blockwise FP8 GEMM, MoE, and MLA) for AMD Instinct MI300X ☆72 · Updated last week
- Buda compiler backend for Tenstorrent devices ☆30 · Updated 7 months ago
- QuickReduce is a performant all-reduce library designed for AMD ROCm that supports inline compression. ☆35 · Updated 3 months ago
- AMD-SHARK inference modeling and serving ☆56 · Updated this week
- TT-NN operator library and TT-Metalium low-level kernel programming model ☆1,265 · Updated this week
- Repo for the AI Compiler team; its intended purpose is the implementation of a PJRT device. ☆44 · Updated this week
- ☆27 · Updated 8 months ago
- IREE's PyTorch frontend, based on Torch Dynamo ☆101 · Updated this week
- Unofficial description of the CUDA assembly (SASS) instruction sets ☆161 · Updated 4 months ago
- OpenAI Triton backend for Intel® GPUs ☆221 · Updated this week
- GPUOcelot: a dynamic compilation framework for PTX ☆216 · Updated 9 months ago
- Tenstorrent console-based hardware information program ☆57 · Updated this week
- A comprehensive tool for visualizing and analyzing model execution, offering interactive graphs, memory plots, tensor details, buffer ove… ☆40 · Updated last week
- KernelBench: Can LLMs Write GPU Kernels? A benchmark with Torch → CUDA (+ more DSLs) ☆676 · Updated last week
- ☆126 · Updated last month
- Official problem sets / reference kernels for the GPU MODE leaderboard ☆160 · Updated 2 weeks ago
- AMD RAD's multi-GPU Triton-based framework for seamless multi-GPU programming ☆116 · Updated last week
- Shared middle layer for Triton compilation ☆313 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆26 · Updated this week
- ☆118 · Updated last week