containers / ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from any source and facilitates their use for inference in production, all through the familiar language of containers.
☆2,357 · Updated last week
Alternatives and similar repositories for ramalama
Users interested in ramalama are comparing it to the repositories listed below:
- Boot and upgrade via container images ☆1,705 · Updated last week
- InstructLab Core package. Use this to chat with a model and execute the InstructLab workflow to train a model using custom taxonomy data… ☆1,390 · Updated this week
- Reliable model swapping for any local OpenAI/Anthropic compatible server - llama.cpp, vllm, etc ☆1,977 · Updated last week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,827 · Updated this week
- Work with LLMs on a local environment using containers ☆266 · Updated this week
- Effortlessly run LLM backends, APIs, frontends, and services with one command. ☆2,172 · Updated this week
- An external provider for Llama Stack allowing for the use of RamaLama for inference. ☆21 · Updated last week
- ☆612 · Updated this week
- Support for bootable OS containers (bootc) and generating disk images ☆466 · Updated this week
- VS Code extension for LLM-assisted code/text completion ☆1,082 · Updated 3 weeks ago
- Tool for interactive command line environments on Linux ☆3,143 · Updated last month
- Taxonomy tree that will allow you to create models tuned with your data ☆287 · Updated 3 months ago
- the terminal client for Ollama ☆2,283 · Updated last month
- 📖 Runbooks that run ☆2,141 · Updated last week
- CodeGate: Security, Workspaces and Multiplexing for AI Agentic Frameworks ☆703 · Updated 6 months ago
- Delivery infrastructure for agents. Arch is a models-native proxy server that handles the plumbing work in AI: agent routing & orchestrat… ☆4,487 · Updated this week
- Docker Model Runner ☆285 · Updated this week
- Go manage your Ollama models ☆1,598 · Updated last week
- Create microVMs from OCI images ☆1,555 · Updated 2 months ago
- Run any Linux process in a secure, unprivileged sandbox using Landlock. Think firejail, but lightweight, user-friendly, and baked into th… ☆1,987 · Updated 2 months ago
- Local AI API Platform ☆2,763 · Updated 5 months ago
- AI-Powered, Non-Intrusive Terminal Assistant ☆1,342 · Updated last week
- A powerful document AI question-answering tool that connects to your local Ollama models. Create, manage, and interact with RAG systems f… ☆1,087 · Updated 4 months ago
- Lock, Stock, and Two Smoking MicroVMs. Create and manage the lifecycle of MicroVMs backed by containerd. ☆1,195 · Updated last week
- A next-gen FOSS self-hosted unified zero trust secure access platform that can operate as a remote access VPN, a ZTNA platform, API/AI/MC… ☆2,854 · Updated last week
- AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports VLMs, LLMs, embeddings, and speech-to-te… ☆1,101 · Updated last week
- Local CLI Copilot, powered by Ollama. 💻🦙 ☆1,460 · Updated last month
- Podman desktop companion ☆1,576 · Updated 3 weeks ago
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices means faster inference. ☆2,755 · Updated last month
- Examples for building and running LLM services and applications locally with Podman ☆184 · Updated 4 months ago