onnx / turnkeyml
No-code CLI designed for accelerating ONNX workflows
☆227 · Updated 7 months ago
Alternatives and similar repositories for turnkeyml
Users interested in turnkeyml often compare it to the libraries listed below.
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆92 · Updated last week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI endpoints. ☆295 · Updated this week
- ☆137 · Updated this week
- AI Tensor Engine for ROCm ☆351 · Updated this week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆769 · Updated this week
- AMD-related optimizations for transformer models ☆97 · Updated 3 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆352 · Updated last year
- The HIP Environment and ROCm Kit - a lightweight open-source build system for HIP and ROCm ☆770 · Updated this week
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆428 · Updated this week
- Use safetensors with ONNX 🤗 ☆87 · Updated this week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- LLM training in simple, raw C/HIP for AMD GPUs ☆58 · Updated last year
- 🎯 An accuracy-first, highly efficient quantization toolkit for LLMs, designed to minimize quality degradation across Weight-Only Quantiza… ☆845 · Updated this week
- Welcome to the official repository of SINQ! A novel, fast, high-quality quantization method designed to make any Large Language Model … ☆590 · Updated 3 weeks ago
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆532 · Updated this week
- Digest AI is a powerful model analysis tool that extracts insights from your models. ☆40 · Updated 8 months ago
- [DEPRECATED] Moved to the ROCm/rocm-libraries repo ☆113 · Updated this week
- ☆151 · Updated this week
- Build AI agents for your PC ☆916 · Updated this week
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on the Intel GPU (XPU) device. Note… ☆64 · Updated 7 months ago
- Intel® AI Super Builder ☆159 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆114 · Updated this week
- LLM inference in C/C++ ☆104 · Updated 2 weeks ago
- Fast and Furious AMD Kernels ☆348 · Updated 2 weeks ago
- Developer-kit reference setup scripts for various kinds of Intel platforms and GPUs ☆42 · Updated this week
- Repository of model demos using TT-Buda ☆63 · Updated 10 months ago
- OpenAI Triton backend for Intel® GPUs ☆226 · Updated this week
- OpenVINO Tokenizers extension ☆48 · Updated this week
- llama.cpp fork used by GPT4All ☆55 · Updated 11 months ago
- High-speed and easy-to-use LLM serving framework for local deployment ☆146 · Updated 6 months ago