leaningtech / headscale
An open source, self-hosted implementation of the Tailscale control server
☆25 · Updated 9 months ago
Alternatives and similar repositories for headscale
Users who are interested in headscale are comparing it to the libraries listed below
- Run X86 binary applications and libraries in the browser ☆84 · Updated 2 weeks ago
- A lightweight LLaMA.cpp HTTP server Docker image based on Alpine Linux. ☆29 · Updated 4 months ago
- LocalScore is an open benchmark which helps you understand how well your computer can handle local AI tasks. ☆83 · Updated last week
- In this repository, 16 models compete to outperform each other in the game Town of Salem. Each model is randomly assigned roles like Vamp… ☆42 · Updated 7 months ago
- Run multiple resource-heavy Large Models (LM) on the same machine with limited amount of VRAM/other resources by exposing them on differe… ☆88 · Updated this week
- Kroko ASR - Speech-to-text ☆130 · Updated 3 months ago
- Download models from the Ollama library, without Ollama ☆122 · Updated last year
- EntityDB is an in-browser vector database wrapping indexedDB and Transformers.js over WebAssembly ☆270 · Updated 8 months ago
- ☆209 · Updated last month
- Create 3D files in the CLI with Small Language Model ☆43 · Updated 3 months ago
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆282 · Updated last month
- Aggregates compute from spare GPU capacity ☆190 · Updated this week
- ☆51 · Updated 4 months ago
- Discover Exceptional MCP Servers ☆286 · Updated 3 weeks ago
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model di… ☆133 · Updated last week
- Code execution utilities for Open WebUI & Ollama ☆318 · Updated last year
- Small Language Model Inference, Fine-Tuning and Observability. No GPU, no labeled data needed. ☆82 · Updated last week
- Nginx proxy server in a Docker container to Authenticate & Proxy requests to Ollama from Public Internet via Cloudflare Tunnel ☆158 · Updated 5 months ago
- 100% Local Memory layer and Knowledge base for agents with WebUI ☆728 · Updated last week
- AI Server ☆111 · Updated 8 months ago
- Wraps any OpenAI API interface as Responses with MCPs support so it supports Codex. Adding any missing stateful features. Ollama and Vllm… ☆146 · Updated 3 months ago
- RetroChat is a powerful command-line interface for interacting with various AI language models. It provides a seamless experience for eng… ☆84 · Updated 6 months ago
- InferX: Inference as a Service Platform ☆154 · Updated this week
- This repo contains all the code necessary to build the docker images for the browser and desktop sandbox ☆19 · Updated 2 months ago
- A battery-included, POSIX-compatible, generative shell ☆374 · Updated this week
- ☆15 · Updated 10 months ago
- VSCode AI coding assistant powered by self-hosted llama.cpp endpoint. ☆183 · Updated last year
- An MCP client for Node.js. ☆106 · Updated 5 months ago
- GPU-accelerated LLaMA inference wrapper for legacy Vulkan-capable systems: a Pythonic way to run AI with knowledge (Ilm) on fire (Vulkan). ☆28 · Updated 3 months ago
- MinimalChat is a lightweight, open-source chat application that allows you to interact with various large language models. ☆269 · Updated 11 months ago