intel / intel-ai-super-builder
Intel® AI Super Builder
☆151 · Updated this week
Alternatives and similar repositories for intel-ai-super-builder
Users interested in intel-ai-super-builder are comparing it to the repositories listed below.
- No-code CLI designed for accelerating ONNX workflows ☆224 · Updated 7 months ago
- ☆119 · Updated last week
- MLPerf Client is a benchmark for Windows, Linux, and macOS, focusing on client form factors in ML inference scenarios. ☆67 · Updated 2 months ago
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI-compatible endpoints. ☆274 · Updated this week
- This repository contains Dockerfiles, scripts, YAML files, Helm charts, etc., used to scale out AI containers with versions of TensorFlow … ☆59 · Updated this week
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆414 · Updated this week
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform, which enables developers to take… ☆91 · Updated last week
- This project benchmarks 41 open-source large language models across 19 evaluation tasks using the lm-evaluation-harness library. ☆85 · Updated 4 months ago
- The NVIDIA RTX™ AI Toolkit is a suite of tools and SDKs for Windows developers to customize, optimize, and deploy AI models across RTX PC… ☆181 · Updated last month
- ☆36 · Updated this week
- ☆148 · Updated last month
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated last year
- For individual users, watsonx Code Assistant can access a local IBM Granite model ☆37 · Updated 6 months ago
- Intel® NPU Acceleration Library ☆701 · Updated 8 months ago
- GenAI components at the micro-service level; a GenAI service composer to create mega-services ☆193 · Updated 2 weeks ago
- ☆180 · Updated last month
- The HIP Environment and ROCm Kit - a lightweight open-source build system for HIP and ROCm ☆724 · Updated this week
- Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU a… ☆42 · Updated last year
- AI PC starter app for AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU. ☆710 · Updated this week
- Explore our open-source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools fro… ☆62 · Updated 9 months ago
- NextCoder: Robust Adaptation of Code LMs to Diverse Code Edits (ICML'25) ☆37 · Updated 6 months ago
- GenAI Studio is a low-code platform that enables users to construct, evaluate, and benchmark GenAI applications. The platform also provide c… ☆55 · Updated last month
- InferX: Inference as a Service Platform ☆146 · Updated this week
- Generate a llama-quantize command to copy the quantization parameters of any GGUF ☆29 · Updated 5 months ago
- ☆108 · Updated 4 months ago
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆637 · Updated this week
- ☆135 · Updated this week
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- ☆94 · Updated last year
- llama.cpp fork used by GPT4All ☆55 · Updated 10 months ago