Talnz007 / VulkanIlm
GPU-accelerated LLaMA inference wrapper for legacy Vulkan-capable systems: a Pythonic way to run AI with knowledge (Ilm) on fire (Vulkan).
☆ 21 · Updated last month
Alternatives and similar repositories for VulkanIlm
Users interested in VulkanIlm are comparing it to the libraries listed below.
- A simple tool to anonymize LLM prompts. ☆ 65 · Updated 7 months ago
- ☆ 209 · Updated 2 weeks ago
- A web application that converts speech to speech, 100% private ☆ 76 · Updated 3 months ago
- A lightweight LLaMA.cpp HTTP server Docker image based on Alpine Linux. ☆ 29 · Updated 3 weeks ago
- A document-based RAG application ☆ 128 · Updated 5 months ago
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆ 82 · Updated last week
- Lightweight & fast AI inference proxy for self-hosted LLM backends like Ollama, LM Studio and others. Designed for speed, simplicity and… ☆ 90 · Updated this week
- A simple, easy-to-customize pipeline for local RAG evaluation. Starter prompts and metric definitions included. ☆ 23 · Updated 2 weeks ago
- A Field-Theoretic Approach to Unbounded Memory in Large Language Models ☆ 20 · Updated 5 months ago
- Powerful LLM query framework with YAML prompt templates. Made for automation. ☆ 32 · Updated this week
- Lightweight CLI coding agent ☆ 57 · Updated 4 months ago
- RetroChat is a powerful command-line interface for interacting with various AI language models. It provides a seamless experience for eng… ☆ 82 · Updated 2 months ago
- Editor with LLM generation tree exploration ☆ 76 · Updated 7 months ago
- ☆ 91 · Updated last year
- ☆ 60 · Updated last year
- Chat WebUI is an easy-to-use user interface for interacting with AI, and it comes with multiple useful built-in tools such as web search… ☆ 45 · Updated 3 weeks ago
- Code scanner to check for issues in prompts and LLM calls ☆ 73 · Updated 5 months ago
- Lightweight inference server for OpenVINO ☆ 211 · Updated this week
- *NIX shell with local AI/LLM integration ☆ 23 · Updated 6 months ago
- ☆ 83 · Updated 6 months ago
- LocalScore is an open benchmark which helps you understand how well your computer can handle local AI tasks. ☆ 59 · Updated 2 weeks ago
- ☆ 17 · Updated 2 months ago
- An MCP server that queries public SearXNG instances, parsing HTML contents into a JSON result ☆ 19 · Updated 3 weeks ago
- A platform to self-host AI on easy mode ☆ 167 · Updated last week
- Documentation site for fast-agent ☆ 19 · Updated this week
- 33B Chinese LLM, DPO QLoRA, 100K context, AirLLM 70B inference with a single 4GB GPU ☆ 13 · Updated last year
- No-messing-around sh client for llama.cpp's server ☆ 30 · Updated last year
- Generate a wiki for your research topic, sourcing from the web and your docs. ☆ 52 · Updated 6 months ago
- Let LLMs control embedded devices via the Model Context Protocol. ☆ 147 · Updated 2 months ago
- A simple Chrome extension to interact directly with LLMs and Ollama from any tab in your browser ☆ 44 · Updated 2 weeks ago