onnx / turnkeyml
No-code CLI designed for accelerating ONNX workflows
☆207, updated last month
Alternatives and similar repositories for turnkeyml
Users interested in turnkeyml are comparing it to the libraries listed below.
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm (☆269, updated this week)
- AI Tensor Engine for ROCm (☆243, updated this week)
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… (☆381, updated this week)
- AMD-related optimizations for transformer models (☆81, updated last month)
- An innovative library for efficient LLM inference via low-bit quantization (☆349, updated 11 months ago)
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs (☆583, updated 2 weeks ago)
- [DEPRECATED] Moved to the ROCm/rocm-libraries repo (☆111, updated last week)
- Lightweight inference server for OpenVINO (☆191, updated 2 weeks ago)
- Use safetensors with ONNX 🤗 (☆69, updated last month)
- Run LLM agents on Ryzen AI PCs in minutes (☆485, updated last month)
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime (☆316, updated this week)
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… (☆73, updated last week)
- High-speed, easy-to-use LLM serving framework for local deployment (☆115, updated 4 months ago)
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs (☆87, updated last week)
- Advanced quantization algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA and HPU. Seamlessly integrated with Torchao, Tra… (☆564, updated last week)
- Development repository for the Triton language and compiler (☆127, updated this week)
- Repository of model demos using TT-Buda (☆62, updated 4 months ago)
- ☆115, updated last week
- llama.cpp fork used by GPT4All (☆56, updated 5 months ago)
- OpenAI Triton backend for Intel® GPUs (☆197, updated this week)
- Fast and memory-efficient exact attention (☆179, updated last week)
- Digest AI is a powerful model analysis tool that extracts insights from your models. (☆29, updated 2 months ago)
- ☆290, updated this week
- Intel® NPU Acceleration Library (☆682, updated 3 months ago)
- llama.cpp fork with additional SOTA quants and improved performance (☆964, updated last week)
- Generative AI extensions for onnxruntime (☆783, updated this week)
- VPTQ, a flexible and extreme low-bit quantization algorithm (☆648, updated 3 months ago)
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python. (☆369, updated this week)
- A curated list of OpenVINO-based AI projects (☆146, updated last month)
- ☆120, updated last year