onnx / turnkeyml
No-code CLI designed for accelerating ONNX workflows
☆216 · Updated 5 months ago
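For context, this is the kind of workflow turnkeyml's no-code CLI wraps: exporting a model to ONNX and running it with an inference runtime. Below is a minimal hand-rolled sketch using plain `torch.onnx` and `onnxruntime` — not turnkeyml's own API — with an illustrative toy model and file name:

```python
# What a manual ONNX workflow looks like without turnkeyml:
# export a PyTorch model to ONNX, then run it with ONNX Runtime.
import torch
import onnxruntime as ort

model = torch.nn.Linear(4, 2)   # toy model, illustrative only
dummy = torch.randn(1, 4)       # example input for tracing

# Export the traced model to an ONNX file.
torch.onnx.export(model, dummy, "linear.onnx",
                  input_names=["x"], output_names=["y"])

# Load and run the exported graph with ONNX Runtime on CPU.
session = ort.InferenceSession("linear.onnx",
                               providers=["CPUExecutionProvider"])
(outputs,) = session.run(["y"], {"x": dummy.numpy()})
print(outputs.shape)  # (1, 2)
```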
Alternatives and similar repositories for turnkeyml
Users interested in turnkeyml are comparing it to the libraries listed below.
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆549 · Updated this week
- AMD-related optimizations for transformer models ☆95 · Updated last month
- AI Tensor Engine for ROCm ☆298 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆85 · Updated this week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆689 · Updated last week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI-compatible endpoints. ☆238 · Updated last week
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆371 · Updated this week
- ☆126 · Updated last week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated last week
- Use safetensors with ONNX 🤗 ☆73 · Updated last month
- Run LLM Agents on Ryzen AI PCs in Minutes ☆715 · Updated 2 weeks ago
- [DEPRECATED] Moved to the ROCm/rocm-libraries repo ☆114 · Updated this week
- Welcome to the official repository of SINQ! A novel, fast and high-quality quantization method designed to make any Large Language Model … ☆570 · Updated 2 weeks ago
- Development repository for the Triton language and compiler ☆137 · Updated this week
- AMD's graph optimization engine. ☆263 · Updated this week
- MLPerf Client is a benchmark for Windows and macOS, focusing on client form factors in ML inference scenarios. ☆57 · Updated last month
- OpenAI Triton backend for Intel® GPUs ☆218 · Updated this week
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA, and HPU ☆701 · Updated this week
- High-speed and easy-to-use LLM serving framework for local deployment ☆132 · Updated 3 months ago
- Intel® NPU Acceleration Library ☆694 · Updated 6 months ago
- Intel® AI Assistant Builder ☆121 · Updated 2 weeks ago
- Digest AI is a powerful model analysis tool that extracts insights from your models. ☆35 · Updated 5 months ago
- ☆468 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆507 · Updated this week
- Generative AI extensions for onnxruntime ☆878 · Updated this week
- Repository of model demos using TT-Buda ☆63 · Updated 7 months ago
- ☆168 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 4 months ago
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python (see the sketch after this list). ☆408 · Updated this week
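The last entry, ONNX Script, is the "author ONNX in Python" approach mentioned above. A minimal sketch of that decorator-based style, assuming onnxscript's published API (the function name and shapes here are made up for illustration):

```python
# Author a small ONNX function in Python with onnxscript, then
# materialize it as a standalone ONNX model.
from onnxscript import FLOAT, script
from onnxscript import opset18 as op

@script()
def matmul_relu(A: FLOAT["M", "K"], W: FLOAT["K", "N"]) -> FLOAT["M", "N"]:
    # Each op.* call translates one-to-one into an ONNX operator.
    return op.Relu(op.MatMul(A, W))

# Convert the scripted function into an onnx.ModelProto.
model_proto = matmul_relu.to_model_proto()
print(model_proto.graph.name)
```

Scripted functions can also be invoked directly with NumPy arrays for eager-mode debugging before exporting the graph.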