onnx / turnkeyml
No-code CLI designed for accelerating ONNX workflows
☆214 · Updated 3 months ago
Alternatives and similar repositories for turnkeyml
Users interested in turnkeyml are comparing it to the libraries listed below.
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆379 · Updated this week
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆78 · Updated this week
- AI Tensor Engine for ROCm ☆276 · Updated this week
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆334 · Updated this week
- Lightweight inference server for OpenVINO ☆211 · Updated this week
- AMD-related optimizations for transformer models ☆88 · Updated 3 weeks ago
- An innovative library for efficient LLM inference via low-bit quantization ☆348 · Updated last year
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs ☆639 · Updated last month
- Use safetensors with ONNX 🤗 ☆69 · Updated 2 weeks ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆489 · Updated this week
- High-speed and easy-to-use LLM serving framework for local deployment ☆118 · Updated last month
- Run LLM Agents on Ryzen AI PCs in Minutes ☆575 · Updated this week
- ☆334 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆111 · Updated this week
- OpenVINO Tokenizers extension ☆40 · Updated last week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆89 · Updated this week
- LLM inference in C/C++ ☆102 · Updated 3 weeks ago
- llama.cpp fork used by GPT4All ☆56 · Updated 6 months ago
- Advanced quantization algorithm for LLMs and VLMs, with support for CPU, Intel GPU, CUDA, and HPU ☆631 · Updated this week
- Generative AI extensions for onnxruntime ☆825 · Updated this week
- A curated list of OpenVINO-based AI projects ☆154 · Updated 2 months ago
- Intel® NPU Acceleration Library ☆689 · Updated 4 months ago
- ☆124 · Updated last week
- ☆120 · Updated last year
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python ☆381 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆101 · Updated this week
- Development repository for the Triton language and compiler ☆130 · Updated last week
- ☆166 · Updated this week
- Intel® AI Assistant Builder ☆106 · Updated last week
- Fast and memory-efficient exact attention ☆188 · Updated this week