onnx / turnkeyml
No-code CLI designed for accelerating ONNX workflows
☆219 · Updated 5 months ago
Alternatives and similar repositories for turnkeyml
Users interested in turnkeyml are comparing it to the libraries listed below.
- AMD-related optimizations for transformer models ☆96 · Updated last month
- AI Tensor Engine for ROCm ☆311 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI endpoints. ☆260 · Updated last week
- MLPerf Client is a benchmark for Windows, Linux, and macOS, focusing on client form factors in ML inference scenarios. ☆62 · Updated 3 weeks ago
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆381 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆350 · Updated last year
- The HIP Environment and ROCm Kit - A lightweight open-source build system for HIP and ROCm ☆613 · Updated this week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI-powered PCs. ☆698 · Updated 2 weeks ago
- ☆128 · Updated last week
- Advanced quantization toolkit for LLMs and VLMs. Native support for WOQ, MXFP4, NVFP4, GGUF, adaptive schemes, and seamless integration wi… ☆753 · Updated this week
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆87 · Updated last week
- High-speed, easy-to-use LLM serving framework for local deployment ☆137 · Updated 4 months ago
- Intel® AI Assistant Builder ☆131 · Updated last week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆114 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- Use safetensors with ONNX 🤗 ☆76 · Updated 2 months ago
- Welcome to the official repository of SINQ! A novel, fast, and high-quality quantization method designed to make any Large Language Model… ☆579 · Updated 2 weeks ago
- LLM training in simple, raw C/HIP for AMD GPUs ☆55 · Updated last year
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆515 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆63 · Updated 5 months ago
- OpenVINO Tokenizers extension ☆44 · Updated last week
- A curated list of OpenVINO-based AI projects ☆172 · Updated 5 months ago
- OpenAI Triton backend for Intel® GPUs ☆221 · Updated last week
- Sparse inferencing for transformer-based LLMs ☆215 · Updated 3 months ago
- Generative AI extensions for onnxruntime ☆901 · Updated this week
- Run LLM agents on Ryzen AI PCs in minutes ☆792 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆214 · Updated last week
- LLM inference in C/C++ ☆103 · Updated this week
- ☆533 · Updated this week
- AMD's graph optimization engine ☆266 · Updated last week