intel / intel-ai-assistant-builder
Intel® AI Assistant Builder
☆128 · Updated this week
Alternatives and similar repositories for intel-ai-assistant-builder
Users interested in intel-ai-assistant-builder are comparing it to the libraries listed below.
- No-code CLI designed for accelerating ONNX workflows · ☆216 · Updated 5 months ago
- MLPerf Client is a benchmark for Windows, Linux, and macOS, focusing on client form factors in ML inference scenarios. · ☆59 · Updated last week
- ☆78 · Updated this week
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI-compatible endpoints. · ☆247 · Updated 3 weeks ago
- Run Generative AI models with a simple C++/Python API using OpenVINO Runtime · ☆374 · Updated this week
- llama.cpp fork used by GPT4All · ☆55 · Updated 9 months ago
- Run LLM Agents on Ryzen AI PCs in Minutes · ☆766 · Updated last week
- This repository contains Dockerfiles, scripts, YAML files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow … · ☆54 · Updated this week
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… · ☆87 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization · ☆350 · Updated last year
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… · ☆42 · Updated last year
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. · ☆695 · Updated last week
- The HIP Environment and ROCm Kit - a lightweight open-source build system for HIP and ROCm · ☆580 · Updated this week
- ☆34 · Updated last week
- AI PC starter app for AI image creation, image stylizing, and chatbot use on a PC powered by an Intel® Arc™ GPU · ☆665 · Updated 2 weeks ago
- The NVIDIA RTX™ AI Toolkit is a suite of tools and SDKs for Windows developers to customize, optimize, and deploy AI models across RTX PC… · ☆179 · Updated this week
- Phi-4 Multimodal Instruct - OpenAI endpoint and Docker image for self-hosting · ☆40 · Updated 8 months ago
- ☆107 · Updated 3 months ago
- OpenVINO Tokenizers extension · ☆43 · Updated this week
- This reference can be used with any existing OpenAI-integrated apps to run with TRT-LLM inference locally on a GeForce GPU on Windows inste… · ☆127 · Updated last year
- Docker Compose setup to run vLLM on Windows · ☆107 · Updated last year
- AI Studio is an independent app for working with LLMs. · ☆340 · Updated this week
- Review/check GGUF files and estimate memory usage and maximum tokens per second. · ☆216 · Updated 3 months ago
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration · ☆112 · Updated last week
- Benchmark LLM performance · ☆106 · Updated last year
- AMD-related optimizations for transformer models · ☆96 · Updated last month
- On-device LLM Inference Powered by X-Bit Quantization · ☆273 · Updated last week
- GenAI Studio is a low-code platform that enables users to construct, evaluate, and benchmark GenAI applications. The platform also provides c… · ☆54 · Updated 3 months ago
- ☆140 · Updated last month
- Unsloth Studio · ☆118 · Updated 7 months ago