intel / AI-Playground
AI PC starter app for AI image creation, image stylization, and a chatbot on a PC powered by an Intel® Arc™ GPU.
☆674 · Updated this week
Alternatives and similar repositories for AI-Playground
Users interested in AI-Playground are comparing it to the libraries listed below.
- Run LLM Agents on Ryzen AI PCs in Minutes ☆792 · Updated this week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆613 · Updated this week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆698 · Updated 2 weeks ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI-compatible endpoints (see the client sketch after this list). ☆260 · Updated last week
- Run generative AI models with a simple C++/Python API using the OpenVINO Runtime (see the pipeline sketch after this list). ☆381 · Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆488 · Updated last week
- AI inferencing at the edge. A simple one-file way to run various GGML models with KoboldAI's UI, with AMD ROCm offloading. ☆718 · Updated 2 weeks ago
- A proven, usable Stable Diffusion web UI project on Intel Arc GPUs with DirectML. ☆73 · Updated 2 years ago
- Extension for AUTOMATIC1111's Stable Diffusion WebUI that uses Microsoft DirectML to deliver high-performance results on any Windows GPU. ☆59 · Updated last year
- Stable Diffusion web UI ☆338 · Updated last year
- 🎨 ComfyUI standalone pack for Intel GPUs. ☆28 · Updated 3 weeks ago
- Intel® AI Assistant Builder ☆131 · Updated last week
- ROCm library files for gfx1103, with updates for other AMD GPU architectures, for use on Windows. ☆704 · Updated 2 months ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,827 · Updated this week
- Intel® NPU Acceleration Library ☆700 · Updated 7 months ago
- Accelerate local LLM inference and finetuning (LLaMA, Mistral, ChatGLM, Qwen, DeepSeek, Mixtral, Gemma, Phi, MiniCPM, Qwen-VL, MiniCPM-V,… ☆67 · Updated 7 months ago
- Help shape the future of Project G-Assist ☆203 · Updated last week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆117 · Updated last week
- A Python package that extends the official PyTorch to easily get extra performance on Intel platforms (see the IPEX sketch after this list). ☆1,995 · Updated this week
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆338 · Updated this week
- Add support for AMD in ZLUDA ☆77 · Updated 4 months ago
- Prebuilt Windows ROCm libs for gfx1031 and gfx1032 ☆167 · Updated 8 months ago
- A Python package that extends the official PyTorch to easily get extra performance on Intel platforms ☆47 · Updated 11 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆1,358 · Updated this week
- The most powerful and modular Stable Diffusion GUI, API, and backend with a graph/nodes interface. Now ZLUDA-enhanced for better AMD GPU p… ☆700 · Updated last week
- A guide to an Intel Arc-enabled (maybe) version of @AUTOMATIC1111/stable-diffusion-webui ☆55 · Updated 2 years ago
- CUDA on AMD GPUs ☆584 · Updated 3 months ago
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆129 · Updated this week
- A curated list of OpenVINO-based AI projects ☆172 · Updated 5 months ago
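Several of the local servers listed above advertise OpenAI-compatible endpoints (the inference engine for Intel devices, for example), so a standard OpenAI client can talk to them once a model is being served. A minimal client sketch with the stock `openai` Python package; the base URL, API key, and model name are placeholders, not values taken from any of these projects:

```python
# Minimal sketch: querying a locally hosted, OpenAI-compatible server.
# base_url, api_key, and the model name are assumptions/placeholders;
# check each project's docs for the values it actually expects.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # hypothetical local endpoint
    api_key="not-needed",                 # many local servers ignore the key
)

response = client.chat.completions.create(
    model="local-llm",  # placeholder model identifier
    messages=[{"role": "user", "content": "Explain what an NPU is in one sentence."}],
)
print(response.choices[0].message.content)
```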
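The OpenVINO GenAI entry above advertises a simple Python API over the OpenVINO Runtime. A minimal text-generation pipeline sketch, assuming the `openvino_genai` package is installed and a model has already been exported to OpenVINO IR; the model directory and device string below are placeholders:

```python
# Minimal sketch with openvino_genai; the model directory is a placeholder and
# must contain an LLM already converted to OpenVINO IR (e.g. via optimum-intel).
import openvino_genai

pipe = openvino_genai.LLMPipeline("./model-ov-ir", "GPU")  # "CPU" also works
print(pipe.generate("Why are NPUs useful for local inference?", max_new_tokens=64))
```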
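Intel® Extension for PyTorch, the "Python package for extending the official PyTorch" that appears twice in the list, is typically used by wrapping an existing model with its optimize call. A minimal sketch, assuming a CPU build of the extension is installed; the toy model and dtype choice are illustrative only:

```python
# Minimal sketch of Intel Extension for PyTorch (IPEX) inference optimization.
# The tiny model is illustrative; dtype and device depend on your IPEX build
# (CPU vs. XPU) and hardware.
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
model = ipex.optimize(model, dtype=torch.bfloat16)  # apply IPEX optimizations

with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    out = model(torch.randn(1, 128))
print(out.shape)  # torch.Size([1, 10])
```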