leaningtech / headscale
An open source, self-hosted implementation of the Tailscale control server
☆24 · Updated 7 months ago
Alternatives and similar repositories for headscale
Users interested in headscale are comparing it to the libraries listed below.
- Run x86 binary applications and libraries in the browser ☆73 · Updated 3 months ago
- LocalScore is an open benchmark which helps you understand how well your computer can handle local AI tasks. ☆65 · Updated 2 months ago
- A lightweight LLaMA.cpp HTTP server Docker image based on Alpine Linux. ☆29 · Updated last month
- Phi4 Multimodal Instruct - OpenAI endpoint and Docker Image for self-hosting ☆40 · Updated 8 months ago
- A platform to self-host AI on easy mode ☆173 · Updated this week
- Code execution utilities for Open WebUI & Ollama ☆304 · Updated last year
- whisper-cpp-serve: real-time speech recognition server for OpenAI's Whisper model in C/C++ ☆71 · Updated last year
- ☆18 · Updated 8 months ago
- MCP for Proxmox integration in Cline ☆168 · Updated 8 months ago
- AI Server ☆108 · Updated 6 months ago
- ☆61 · Updated last week
- API up your Ollama Server. ☆186 · Updated last month
- Create 3D files in the CLI with Small Language Model ☆41 · Updated 3 weeks ago
- ☆207 · Updated 2 months ago
- ☆15 · Updated 8 months ago
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆277 · Updated 2 months ago
- Kroko ASR - Speech-to-text ☆100 · Updated last month
- EntityDB is an in-browser vector database wrapping indexedDB and Transformers.js over WebAssembly ☆234 · Updated 6 months ago
- A web-based calculator for estimating GPU memory requirements and maximum concurrent requests for self-hosted LLM inference. ☆20 · Updated 2 months ago
- fast state-of-the-art speech models and a runtime that runs anywhere 💥 ☆57 · Updated 4 months ago
- Discover Exceptional MCP Servers ☆269 · Updated 2 months ago
- Virtual environment in web browser. ☆102 · Updated 2 months ago
- VSCode AI coding assistant powered by self-hosted llama.cpp endpoint. ☆183 · Updated 9 months ago
- InferX: Inference as a Service Platform ☆138 · Updated last week
- Run multiple resource-heavy Large Models (LM) on the same machine with limited amount of VRAM/other resources by exposing them on differe… ☆82 · Updated last week
- An MCP client for Node.js. ☆101 · Updated 3 months ago
- Generate Your Own Private Morning Radio for Commute ☆33 · Updated 9 months ago
- RetroChat is a powerful command-line interface for interacting with various AI language models. It provides a seamless experience for eng… ☆81 · Updated 3 months ago
- The specification for the Universal Tool Calling Protocol ☆243 · Updated this week
- Let LLMs control embedded devices via the Model Context Protocol. ☆146 · Updated 4 months ago