onnx / turnkeyml
No-code CLI designed for accelerating ONNX workflows
☆214 · Updated 3 months ago
Alternatives and similar repositories for turnkeyml
Users interested in turnkeyml are comparing it to the repositories listed below.
- AI Tensor Engine for ROCm ☆284 · Updated this week
- AMD-related optimizations for transformer models ☆90 · Updated last month
- Use safetensors with ONNX 🤗 ☆69 · Updated last week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆438 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated last year
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS over OpenAI endpoints. ☆211 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆111 · Updated this week
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime ☆345 · Updated last week
- High-speed and easy-to-use LLM serving framework for local deployment ☆122 · Updated 2 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆90 · Updated this week
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆81 · Updated this week
- ☆127 · Updated this week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆653 · Updated last month
- MLPerf Client is a benchmark for Windows and macOS, focusing on client form factors in ML inference scenarios. ☆51 · Updated 2 months ago
- ☆388 · Updated this week
- Repository of model demos using TT-Buda ☆62 · Updated 6 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆498 · Updated this week
- Run LLM Agents on Ryzen AI PCs in Minutes ☆639 · Updated last week
- Development repository for the Triton language and compiler ☆133 · Updated this week
- ☆120 · Updated last year
- llama.cpp fork used by GPT4All ☆56 · Updated 7 months ago
- Welcome to the official repository of SINQ! A novel, fast and high-quality quantization method designed to make any Large Language Model … ☆424 · Updated this week
- Advanced Quantization Algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU ☆651 · Updated this week
- OpenVINO Intel NPU Compiler ☆71 · Updated this week
- AMD's graph optimization engine ☆253 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆103 · Updated this week
- Digest AI is a powerful model analysis tool that extracts insights from your models. ☆32 · Updated 4 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆265 · Updated 11 months ago
- OpenVINO Tokenizers extension ☆42 · Updated last week
- A minimalistic C++ Jinja templating engine for LLM chat templates ☆187 · Updated 2 weeks ago