intel / intel-ai-assistant-builder
Intel® AI Assistant Builder
☆117 · Updated last week
Alternatives and similar repositories for intel-ai-assistant-builder
Users interested in intel-ai-assistant-builder are comparing it to the libraries listed below
- No-code CLI designed for accelerating ONNX workflows ☆216 · Updated 4 months ago
- MLPerf Client is a benchmark for Windows and macOS, focusing on client form factors in ML inference scenarios. ☆55 · Updated last month
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI-compatible endpoints. ☆236 · Updated last week
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… ☆42 · Updated last year
- High-Performance Text Deduplication Toolkit ☆59 · Updated 2 months ago
- ☆65 · Updated this week
- Run LLM Agents on Ryzen AI PCs in Minutes ☆702 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- Phi4 Multimodal Instruct - OpenAI endpoint and Docker Image for self-hosting ☆40 · Updated 8 months ago
- llama.cpp fork used by GPT4All ☆57 · Updated 8 months ago
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime (see the sketch after this list) ☆368 · Updated this week
- ☆105 · Updated 2 months ago
- OpenVINO Tokenizers extension ☆42 · Updated 2 weeks ago
- Unsloth Studio ☆116 · Updated 7 months ago
- LLM inference in C/C++ ☆103 · Updated last week
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆214 · Updated 2 months ago
- This repository contains Dockerfiles, scripts, yaml files, Helm charts, etc. used to scale out AI containers with versions of TensorFlow … ☆52 · Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆403 · Updated this week
- cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server a… ☆41 · Updated 4 months ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆535 · Updated this week
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆83 · Updated this week
- Intel® SHMEM - Device-initiated, shared-memory-based communication library ☆28 · Updated last week
- AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU. ☆649 · Updated last week
- Reference setup scripts for developer kits on various Intel platforms and GPUs ☆38 · Updated last week
- API Server for Transformer Lab ☆78 · Updated this week
- ☆40 · Updated 2 months ago
- ☆456 · Updated this week
- This project benchmarks 41 open-source large language models across 19 evaluation tasks using the lm-evaluation-harness library. ☆78 · Updated 2 months ago
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆83 · Updated last week
- This is the Mixture-of-Agents (MoA) concept, adapted from the original work by TogetherAI. My version is tailored for local model usage a… ☆118 · Updated last year
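
The OpenVINO GenAI entry above is representative of how several of these projects are driven from Python. Below is a minimal sketch, assuming a model directory that has already been converted to OpenVINO IR (for example with `optimum-cli export openvino`); the directory name and prompt are placeholders, not files shipped with the library.

```python
# Minimal text-generation sketch with OpenVINO GenAI (openvino_genai).
# Assumes "TinyLlama-1.1B-Chat-v1.0-ov" is a local directory containing an
# OpenVINO IR model exported beforehand (placeholder name, not bundled).
import openvino_genai as ov_genai

# Second argument selects the inference device: "CPU", "GPU", or "NPU".
pipe = ov_genai.LLMPipeline("TinyLlama-1.1B-Chat-v1.0-ov", "CPU")

# Generate up to 100 new tokens for the prompt and print the completion.
print(pipe.generate("What is an AI PC?", max_new_tokens=100))
```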