sozercan / aikit
Fine-tune, build, and deploy open-source LLMs easily!
⭐ 437 · Updated this week
Alternatives and similar repositories for aikit:
Users interested in aikit are comparing it to the repositories listed below:
- a curated collection of models ready-to-use with LocalAI · ⭐ 264 · Updated 9 months ago
- Open Weight, tool-calling LLMs · ⭐ 151 · Updated 5 months ago
- 100% Local AGI with LocalAI · ⭐ 451 · Updated 9 months ago
- Helix is a private GenAI stack for building AI applications with declarative pipelines, knowledge (RAG), API bindings, and first-class… · ⭐ 474 · Updated this week
- Jupyter Notebooks for Ollama integration · ⭐ 122 · Updated 2 months ago
- Efficient visual programming for AI language models · ⭐ 353 · Updated 6 months ago
- VSCode AI coding assistant powered by self-hosted llama.cpp endpoint. · ⭐ 181 · Updated last month
- Lightweight OpenAI drop-in replacement for Kubernetes · ⭐ 144 · Updated last year
- [deprecated] AI Gateway - core infrastructure stack for building production-ready AI Applications · ⭐ 158 · Updated 11 months ago
- Helm chart for Ollama on Kubernetes · ⭐ 398 · Updated last week
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. · ⭐ 115 · Updated 10 months ago
- Inference engine powering open source models on OpenRouter · ⭐ 802 · Updated 2 months ago
- Yet another operator for running large language models on Kubernetes with ease. Powered by Ollama! · ⭐ 175 · Updated this week
- ⭐ 101 · Updated 11 months ago
- OpenAI-compatible API for LLMs and embeddings (LLaMA, Vicuna, ChatGLM and many others; see the request sketch after this list) · ⭐ 275 · Updated last year
- Retrieval Augmented Generation (RAG) with txtai. Combine search and LLMs to find insights with your own data. · ⭐ 346 · Updated 3 months ago
- Midori AI's Mono Repo! Check out our site below! · ⭐ 118 · Updated this week
- LLMX; Easiest 3rd party Local LLM UI for the web! · ⭐ 229 · Updated last month
- Open source alternative to Perplexity AI with ability to run locally · ⭐ 198 · Updated 5 months ago
- LM Studio JSON configuration file format and a collection of example config files. · ⭐ 194 · Updated 7 months ago
- LM inference server implementation based on *.cpp. · ⭐ 149 · Updated this week
- A simple to use Ollama autocompletion engine with options exposed and streaming functionality · ⭐ 121 · Updated 5 months ago
- ⭐ 148 · Updated last week
- Run LLMs in the Browser with MLC / WebLLM · ⭐ 124 · Updated 5 months ago
- We introduced a new model designed for the Code generation task. Its test accuracy on the HumanEval base dataset surpasses that of GPT-4… · ⭐ 839 · Updated 8 months ago
- Building open version of OpenAI o1 via reasoning traces (Groq, ollama, Anthropic, Gemini, OpenAI, Azure supported) Demo: https://hugging… · ⭐ 175 · Updated 5 months ago
- Code execution utilities for Open WebUI & Ollama · ⭐ 264 · Updated 4 months ago
- Stateful load balancer custom-tailored for llama.cpp · ⭐ 732 · Updated last week
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. · ⭐ 135 · Updated 2 weeks ago
- AI for all: Build the large graph of the language models · ⭐ 263 · Updated 9 months ago
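A recurring theme in this list is serving local models behind an OpenAI-compatible HTTP API (see, for example, the "OpenAI-compatible API for LLMs and embeddings" and "Lightweight OpenAI drop-in replacement for Kubernetes" entries). The sketch below shows what such a request can look like from Python; it is illustrative only, and the base URL, port, model name, and API key are assumptions rather than values taken from any specific repository above.

```python
# Minimal sketch: query an OpenAI-compatible /v1/chat/completions endpoint.
# Assumptions (not taken from any repo above): the server listens on
# http://localhost:8080 and serves a model named "llama-3.2-1b-instruct".
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"   # assumed local server address
MODEL = "llama-3.2-1b-instruct"         # assumed model name

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Say hello in one short sentence."},
    ],
    "temperature": 0.2,
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Many local servers ignore the key; OpenAI-style clients still send one.
        "Authorization": "Bearer not-needed",
    },
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])
```

Because the request shape is the same across OpenAI-compatible servers, pointing BASE_URL at whichever one you deploy is usually the only change needed.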