LLM frontend in a single HTML file
☆717 · Dec 27, 2025 · Updated 3 months ago
Alternatives and similar repositories for mikupad
Users interested in mikupad are comparing it to the repositories listed below.
- Serverless single HTML page access to an OpenAI API compatible Local LLM ☆45 · Sep 9, 2025 · Updated 7 months ago
- No-messing-around sh client for llama.cpp's server ☆30 · Aug 7, 2024 · Updated last year
- A zero dependency web UI for any LLM backend, including KoboldCpp, OpenAI and AI Horde ☆172 · Updated this week
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,175 · Apr 12, 2026 · Updated last week
- A frontend for creative writing with LLMs ☆157 · Jul 15, 2024 · Updated last year
- ☆20 · Jul 4, 2025 · Updated 9 months ago
- Large-scale LLM inference engine ☆1,695 · Mar 12, 2026 · Updated last month
- Efficient visual programming for AI language models ☆361 · May 13, 2025 · Updated 11 months ago
- Web UI for ExLlamaV2 ☆511 · Feb 5, 2025 · Updated last year
- Run GGUF models easily with a KoboldAI UI. One File. Zero Install. ☆10,074 · Apr 12, 2026 · Updated last week
- A multimodal, function calling powered LLM webui. ☆214 · Sep 23, 2024 · Updated last year
- ☆94 · Mar 28, 2026 · Updated 3 weeks ago
- Easily view and modify JSON datasets for large language models ☆87 · May 16, 2025 · Updated 11 months ago
- A character.ai-like UI for LLMs ☆10 · Dec 3, 2024 · Updated last year
- Clipboard Conqueror is a novel copy and paste copilot alternative designed to bring your very own LLM AI assistant to any text field. ☆442 · Jan 11, 2025 · Updated last year
- Native GUI for several AI services plus llama.cpp local AIs. ☆115 · Jan 14, 2024 · Updated 2 years ago
- ☆342 · Mar 5, 2026 · Updated last month
- Writing Extension for Text Generation WebUI ☆67 · Aug 7, 2025 · Updated 8 months ago
- A local-first LLM development studio. Build, test, and customize inference workflows with your own models — no cloud, totally local. ☆17 · May 21, 2025 · Updated 10 months ago
- One command brings a complete pre-wired LLM stack with hundreds of services to explore. ☆2,831 · Apr 13, 2026 · Updated last week
- A simple framework for using a local Koboldcpp LLM to help with story-writing ☆23 · Nov 26, 2023 · Updated 2 years ago
- The hearth of The Pulsar App: fast, secure, and shared inference with a modern UI ☆60 · Dec 1, 2024 · Updated last year
- A web app to explore topics using an LLM (less typing, more clicks) ☆68 · Mar 15, 2026 · Updated last month
- Yet another frontend for LLM, written using .NET and WinUI 3 ☆11 · Sep 14, 2025 · Updated 7 months ago
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆167 · Updated this week
- WilmerAI is one of the oldest LLM semantic routers. It uses multi-layer prompt routing and complex workflows to allow you to not only cre… ☆809 · Updated this week
- AI management tool ☆121 · Nov 9, 2024 · Updated last year
- Text WebUI extension to add clever Notebooks to Chat mode ☆147 · Aug 7, 2025 · Updated 8 months ago
- ☆12 · Jan 19, 2024 · Updated 2 years ago
- A conversational UI for chatbots using the llama.cpp server ☆14 · May 26, 2025 · Updated 10 months ago
- Create Custom LLMs ☆1,828 · Nov 8, 2025 · Updated 5 months ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆72 · Sep 10, 2024 · Updated last year
- Cleanai (https://github.com/willmil11/cleanai) except I'm making it in C now. Fast and clean from the start this time :) ☆17 · Mar 6, 2026 · Updated last month
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,497 · Mar 4, 2026 · Updated last month
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆767 · Updated this week
- Simple frontend for LLMs built in react-native. ☆2,300 · Updated this week
- Reliable model swapping for any local OpenAI/Anthropic compatible server - llama.cpp, vllm, etc. ☆3,424 · Updated this week
- Prompt Jinja2 templates for LLMs ☆35 · Jul 9, 2025 · Updated 9 months ago
- klmbr - a prompt pre-processing technique to break through the barrier of entropy while generating text with LLMs ☆87 · Sep 22, 2024 · Updated last year
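A common thread among the frontends above is that they talk to an OpenAI-compatible HTTP endpoint served by a local backend such as llama.cpp's server. A minimal sketch of that request from a single-file HTML page might look like the following; the server URL and port, the helper names, and the sampling defaults are all illustrative assumptions, not taken from any specific project here.

```javascript
// Build a request for the OpenAI-compatible /v1/completions route.
// Assumption: a local backend (e.g. llama.cpp's server) is listening
// at http://localhost:8080 — adjust host/port for your setup.
function buildCompletionRequest(prompt, { maxTokens = 128, temperature = 0.7 } = {}) {
  return {
    url: "http://localhost:8080/v1/completions",
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ prompt, max_tokens: maxTokens, temperature }),
    },
  };
}

// Sending it from the browser (or Node 18+) is a single fetch call.
async function complete(prompt) {
  const { url, options } = buildCompletionRequest(prompt);
  const res = await fetch(url, options);
  const data = await res.json();
  // OpenAI-style completion responses put the text in choices[0].text.
  return data.choices[0].text;
}
```

Because the whole exchange is plain JSON over HTTP, a frontend like this needs no build step or dependencies, which is what makes the single-HTML-file approach workable.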