sozercan / aikit
🏗️ Fine-tune, build, and deploy open-source LLMs easily!
☆359 · Updated this week
Related projects:
- A curated collection of models ready to use with LocalAI ☆259 · Updated 2 months ago
- Open-weight, tool-calling LLMs ☆146 · Updated last month
- A proxy server for multiple Ollama instances with key security ☆247 · Updated 2 months ago
- Go manage your Ollama models ☆371 · Updated this week
- 100% local AGI with LocalAI ☆383 · Updated 2 months ago
- ☆84 · Updated 5 months ago
- Multi-node production AI stack. Run the best of open-source AI easily on your own servers. Create your own AI by fine-tuning open source … ☆319 · Updated this week
- Manage GPU clusters for running LLMs ☆264 · Updated this week
- Helm chart for Ollama on Kubernetes ☆216 · Updated last week
- Private OpenAI on Kubernetes ☆298 · Updated this week
- Effortlessly run LLM backends, APIs, frontends, and services with one command. ☆199 · Updated this week
- Dagger functions to import Hugging Face GGUF models into a local Ollama instance and optionally push them to ollama.com. ☆109 · Updated 3 months ago
- VS Code AI coding assistant powered by a self-hosted llama.cpp endpoint ☆140 · Updated last month
- Efficient visual programming for AI language models ☆288 · Updated last week
- 🚀 Retrieval-augmented generation (RAG) with txtai. Combine search and LLMs to find insights with your own data. ☆226 · Updated 2 weeks ago
- Link your Ollama models to LM Studio ☆107 · Updated 2 months ago
- LLMX; the easiest third-party local LLM UI for the web! ☆144 · Updated last month
- ☆47 · Updated last week
- 🪶 Lightweight OpenAI drop-in replacement for Kubernetes ☆142 · Updated 7 months ago
- ☆190 · Updated last week
- Gollama: your offline conversational AI companion. An interactive tool for generating creative responses from various models, right in yo… ☆91 · Updated 3 weeks ago
- Proxy that lets you use Ollama as a Copilot, like GitHub Copilot ☆285 · Updated 2 weeks ago
- [deprecated] AI Gateway: core infrastructure stack for building production-ready AI applications ☆150 · Updated 5 months ago
- ☆93 · Updated this week
- WebAssembly binding for llama.cpp, enabling in-browser LLM inference ☆342 · Updated 2 weeks ago
- multi1: create o1-like reasoning chains with multiple AI providers (and locally) ☆64 · Updated this week
- Inference engine powering open-source models on OpenRouter ☆517 · Updated 2 months ago
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge ☆990 · Updated this week
- Ollama Cloud is a highly scalable, cloud-native stack for Ollama ☆109 · Updated 6 months ago
- QA-Pilot is an interactive chat project that leverages online/local LLMs for rapid understanding and navigation of GitHub code repositories. ☆157 · Updated last month