onnx / turnkeyml
No-code CLI designed for accelerating ONNX workflows
☆201 · Updated last month
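turnkeyml's no-code pipelines revolve around exporting models to ONNX and benchmarking them, and the common downstream step shared by several tools on this page is running the exported graph with onnxruntime. A minimal sketch, assuming a generic float32 model (the file name "model.onnx" is an illustrative placeholder, not turnkeyml output):

```python
import numpy as np
import onnxruntime as ort

# Load an exported ONNX graph; "model.onnx" is a placeholder path.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

# Query the graph for its first input instead of hard-coding a tensor name.
inp = session.get_inputs()[0]

# Replace symbolic/dynamic dimensions (strings or None) with 1 for a dummy batch.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)  # assumes a float32 input

outputs = session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```

Several of the runtimes listed below plug into this same pattern as ONNX Runtime execution providers (e.g. OpenVINO, Vitis AI for Ryzen AI), so swapping the `providers` list is often the only change needed to target different hardware.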
Alternatives and similar repositories for turnkeyml
Users interested in turnkeyml are comparing it to the libraries listed below.
- Local LLM Server with GPU and NPU Acceleration · ☆184 · Updated last week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm · ☆222 · Updated this week
- AI Tensor Engine for ROCm · ☆226 · Updated this week
- Lightweight Inference server for OpenVINO · ☆187 · Updated last week
- AMD-related optimizations for transformer models · ☆80 · Updated 3 weeks ago
- An innovative library for efficient LLM inference via low-bit quantization · ☆349 · Updated 10 months ago
- Run LLM Agents on Ryzen AI PCs in Minutes · ☆439 · Updated 2 weeks ago
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime · ☆303 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs · ☆87 · Updated this week
- High-speed and easy-to-use LLM serving framework for local deployment · ☆112 · Updated 3 months ago
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs · ☆555 · Updated last week
- Use safetensors with ONNX 🤗 · ☆67 · Updated 2 weeks ago
- [DEPRECATED] Moved to ROCm/rocm-libraries repo · ☆109 · Updated this week
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… · ☆68 · Updated 2 weeks ago
- Repository of model demos using TT-Buda · ☆62 · Updated 3 months ago
- LLM training in simple, raw C/HIP for AMD GPUs · ☆50 · Updated 9 months ago
- ☆161 · Updated last week
- ☆111 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆264 · Updated 9 months ago
- Intel® AI Assistant Builder · ☆87 · Updated 2 weeks ago
- llama.cpp fork with additional SOTA quants and improved performance · ☆652 · Updated this week
- ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python · ☆362 · Updated this week
- llama.cpp fork used by GPT4All · ☆56 · Updated 4 months ago
- Development repository for the Triton language and compiler · ☆125 · Updated this week
- Fully Open Language Models with Stellar Performance · ☆234 · Updated last month
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… · ☆43 · Updated 9 months ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools · ☆477 · Updated this week
- Generative AI extensions for onnxruntime · ☆753 · Updated this week
- Inference server benchmarking tool · ☆79 · Updated 2 months ago
- Intel® NPU Acceleration Library · ☆680 · Updated 2 months ago