itlackey / ipex-arc-fastchat
☆52 · Updated last year
Alternatives and similar repositories for ipex-arc-fastchat:
Users interested in ipex-arc-fastchat are comparing it to the libraries listed below.
- A library and CLI utilities for managing performance states of NVIDIA GPUs. ☆25 · Updated 5 months ago
- Lightweight inference server for OpenVINO ☆143 · Updated this week
- Export and Backup Ollama models into GGUF and ModelFile ☆63 · Updated 6 months ago
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆64 · Updated 5 months ago
- GPU Power and Performance Manager ☆57 · Updated 5 months ago
- A manual for using the Tesla P40 GPU ☆121 · Updated 4 months ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆200 · Updated last month
- LLM SDK for OnnxRuntime GenAI (OGA) ☆119 · Updated this week
- AI-powered chatbot with real-time updates. ☆50 · Updated 5 months ago
- echonotes: a Python application designed to automate the process of extracting handwritten note… ☆47 · Updated 6 months ago
- Benchmark your local LLMs. ☆45 · Updated 7 months ago
- LIVA - Local Intelligent Voice Assistant ☆61 · Updated 7 months ago
- AI Tensor Engine for ROCm ☆142 · Updated this week
- A simple GUI for configuring traefik routes ☆90 · Updated last month
- LLM Chat is an open-source serverless alternative to ChatGPT. ☆33 · Updated 6 months ago
- Open WebUI, ComfyUI, n8n, LocalAI, LLM Proxy, SearXNG, Qdrant, Postgres all in docker compose ☆49 · Updated 5 months ago
- Intel® NPU Acceleration Library ☆653 · Updated 2 months ago
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆204 · Updated last month
- Handy tool to measure the performance and efficiency of LLM workloads. ☆52 · Updated 2 months ago
- Make use of an Intel Arc series GPU to run Ollama, StableDiffusion and Open WebUI, for image generation and interaction with Large Language … ☆17 · Updated last week
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- Linux distro for AI computers ☆159 · Updated last week
- One-click install internet appliances that operate on your terms. Transform your home computer into a sovereign and secure cloud. ☆141 · Updated 5 months ago
- Host GPTQ models using AutoGPTQ as an API compatible with the text generation UI API. ☆91 · Updated last year
- Transparent proxy server with on-demand model swapping for llama.cpp (or any local OpenAI-compatible server) ☆482 · Updated last week
- Simple ollama benchmarking tool. ☆98 · Updated last month
- Code execution utilities for Open WebUI & Ollama ☆264 · Updated 4 months ago