intel / intel-ai-assistant-builder
Intel® AI Assistant Builder
☆140 · Updated this week
Alternatives and similar repositories for intel-ai-assistant-builder
Users interested in intel-ai-assistant-builder are comparing it to the libraries listed below.
- No-code CLI designed for accelerating ONNX workflows. ☆221 · Updated 7 months ago
- MLPerf Client is a benchmark for Windows, Linux, and macOS, focusing on client form factors in ML inference scenarios. ☆67 · Updated last month
- OpenVINO Tokenizers extension. ☆45 · Updated this week
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI-compatible endpoints. ☆270 · Updated last week
- This repository contains Dockerfiles, scripts, YAML files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow … ☆58 · Updated this week
- ☆115 · Updated this week
- A curated list of OpenVINO-based AI projects. ☆177 · Updated 6 months ago
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime. ☆410 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization. ☆351 · Updated last year
- AMD-related optimizations for transformer models. ☆96 · Updated 2 months ago
- ☆36 · Updated this week
- ☆108 · Updated 4 months ago
- Explore our open-source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools fro… ☆62 · Updated 9 months ago
- The NVIDIA RTX™ AI Toolkit is a suite of tools and SDKs for Windows developers to customize, optimize, and deploy AI models across RTX PC… ☆180 · Updated last month
- Review/check GGUF files and estimate their memory usage and maximum tokens per second. ☆223 · Updated this week
- ☆147 · Updated 3 weeks ago
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆89 · Updated 3 weeks ago
- InferX: Inference as a Service platform. ☆146 · Updated last week
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server a… ☆41 · Updated 6 months ago
- Ampere-optimized llama.cpp. ☆28 · Updated 2 weeks ago
- The HIP Environment and ROCm Kit: a lightweight open-source build system for HIP and ROCm. ☆690 · Updated last week
- Run LLM Agents on Ryzen AI PCs in Minutes. ☆843 · Updated last week
- Large Language Model text generation inference on Habana Gaudi. ☆34 · Updated 9 months ago
- Intel® NPU Acceleration Library. ☆700 · Updated 8 months ago
- Aggregates compute from spare GPU capacity. ☆183 · Updated last week
- Developer-kit reference setup scripts for various kinds of Intel platforms and GPUs. ☆40 · Updated this week
- This reference can be used with any existing OpenAI-integrated apps to run with TRT-LLM inference locally on a GeForce GPU on Windows inste… ☆127 · Updated last year
- AI Tensor Engine for ROCm. ☆330 · Updated last week
- llama.cpp fork used by GPT4All. ☆55 · Updated 10 months ago
- 🤗 Optimum Intel: accelerate inference with Intel optimization tools. ☆527 · Updated this week