intel / intel-ai-assistant-builder
Intel® AI Assistant Builder
☆111 · Updated this week
Alternatives and similar repositories for intel-ai-assistant-builder
Users interested in intel-ai-assistant-builder are comparing it to the libraries listed below.
- No-code CLI designed for accelerating ONNX workflows ☆214 · Updated 4 months ago
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, and Kokoro-TTS over OpenAI-compatible endpoints (see the first sketch after this list). ☆213 · Updated last week
- MLPerf Client is a benchmark for Windows and macOS, focusing on client form factors in ML inference scenarios. ☆51 · Updated last week
- Run LLM Agents on Ryzen AI PCs in Minutes ☆649 · Updated last week
- A curated list of OpenVINO-based AI projects ☆162 · Updated 3 months ago
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime (see the second sketch after this list) ☆359 · Updated this week
- This repository contains Dockerfiles, scripts, YAML files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow … ☆52 · Updated last week
- ☆49 · Updated last week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆450 · Updated last week
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆82 · Updated last week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI-powered PCs. ☆665 · Updated this week
- OpenVINO Tokenizers extension ☆42 · Updated last week
- ☆149 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- ☆135 · Updated 2 weeks ago
- ☆102 · Updated last month
- llama.cpp fork used by GPT4All ☆57 · Updated 8 months ago
- This repo contains the documentation for the OPEA project ☆44 · Updated last month
- ☆33 · Updated this week
- GenAI components at the microservice level; a GenAI service composer to create a mega-service ☆178 · Updated last week
- InferX: Inference as a Service Platform ☆136 · Updated last week
- Reference setup scripts for developer kits on various Intel platforms and GPUs ☆35 · Updated this week
- LLM training in simple, raw C/HIP for AMD GPUs ☆51 · Updated last year
- An Awesome list of oneAPI projects ☆151 · Updated 2 months ago
- LM inference server implementation based on *.cpp. ☆281 · Updated 2 months ago
- ☆64 · Updated last year
- All-in-Storage Solution based on DiskANN for DRAM-free Approximate Nearest Neighbor Search ☆80 · Updated 3 months ago
- Ampere-optimized llama.cpp ☆27 · Updated last week
- ☆413 · Updated last week
- Welcome to the official repository of SINQ! A novel, fast and high-quality quantization method designed to make any Large Language Model … ☆504 · Updated this week
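
For the entry above that serves models over OpenAI-compatible endpoints, a minimal Python sketch of a client call is shown below. The base URL, port, API key, and model name are assumptions for illustration, not values taken from that project.

```python
# Minimal sketch: querying a locally hosted OpenAI-compatible endpoint.
# Assumptions: the server listens on http://localhost:8000/v1 and exposes a
# model named "local-model"; adjust both to match the actual deployment.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="local-model",
    messages=[{"role": "user", "content": "Summarize what an NPU is in one sentence."}],
)
print(response.choices[0].message.content)
```

Because the server follows the OpenAI chat-completions schema, any client or tool that speaks that protocol can be pointed at it by changing only the base URL.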
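For the OpenVINO GenAI entry ("Run generative AI models with a simple C++/Python API using the OpenVINO Runtime"), a minimal sketch of the Python LLMPipeline API follows. The model directory path and device choice are assumptions; the directory is expected to hold a model already exported to OpenVINO IR format (for example via optimum-intel).

```python
# Minimal sketch of the OpenVINO GenAI Python API (openvino-genai package).
# Assumption: "./TinyLlama-ov" is a directory containing a model exported to
# OpenVINO IR format; any exported chat/instruct model works the same way.
import openvino_genai as ov_genai

pipe = ov_genai.LLMPipeline("./TinyLlama-ov", "CPU")  # "GPU" or "NPU" also possible
print(pipe.generate("The Sun is yellow because", max_new_tokens=100))
```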