RamaLama is an open-source developer tool that simplifies the local serving of AI models from any source and facilitates their use for inference in production, all through the familiar language of containers.
☆2,807 · May 2, 2026 · Updated last week
Alternatives and similar repositories for ramalama
Users interested in ramalama are comparing it to the repositories listed below.
- An external provider for Llama Stack allowing the use of RamaLama for inference. ☆21 · Dec 22, 2025 · Updated 4 months ago
- ☆26 · Jun 6, 2025 · Updated 11 months ago
- Boot and upgrade via container images ☆2,020 · May 2, 2026 · Updated last week
- Examples for building and running LLM services and applications locally with Podman ☆201 · Feb 13, 2026 · Updated 2 months ago
- A container for deploying bootable container images. ☆442 · Apr 28, 2026 · Updated last week
- Tool for interactive command line environments on Linux ☆3,342 · Apr 11, 2026 · Updated 3 weeks ago
- Work with LLMs in a local environment using containers ☆292 · Apr 29, 2026 · Updated last week
- InstructLab Core package. Use this to chat with a model and execute the InstructLab workflow to train a model using custom taxonomy data… ☆1,415 · Mar 30, 2026 · Updated last month
- Distribute and run LLMs with a single file. ☆24,349 · May 1, 2026 · Updated last week
- Models as a Service ☆75 · Oct 21, 2025 · Updated 6 months ago
- An open-source, extensible AI agent that goes beyond code suggestions - install, execute, edit, and test with any LLM ☆43,786 · Updated this week
- A small form factor OpenShift/Kubernetes optimized for edge computing ☆822 · Updated this week
- Podman Desktop is the best free and open source tool to work with Containers and Kubernetes for developers. Get an intuitive and user-friendly… ☆7,601 · Updated this week
- Podman: A tool for managing OCI containers and pods. ☆31,596 · Updated this week
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes ☆3,148 · Updated this week
- Collaborates with anacondas ☆44 · Apr 5, 2026 · Updated last month
- Collection of demos for building Llama Stack based apps on OpenShift ☆63 · Updated this week
- ODH Tools & Extensions Companion ☆30 · Feb 8, 2026 · Updated 3 months ago
- This project makes running the InstructLab large language model (LLM) fine-tuning process easy and flexible on OpenShift ☆27 · Aug 27, 2025 · Updated 8 months ago
- A tool that facilitates building OCI images. ☆8,766 · Updated this week
- An OCI base image of Fedora CoreOS with batteries included ☆612 · Apr 28, 2026 · Updated last week
- CentOS Stream-based base image ☆13 · Feb 24, 2025 · Updated last year
- Open GenAI Stack ☆8,364 · May 2, 2026 · Updated last week
- Create and maintain base bootable container images from Fedora ELN and CentOS Stream packages ☆48 · Feb 24, 2026 · Updated 2 months ago
- LocalAI is the open-source AI engine. Run any model - LLMs, vision, voice, image, video - on any hardware. No GPU required. ☆46,040 · Updated this week
- ☆50 · Sep 21, 2025 · Updated 7 months ago
- Support for bootable OS containers (bootc) and generating disk images ☆473 · Updated this week
- MicroShift Management and Automation Collection ☆13 · Sep 11, 2024 · Updated last year
- Resources, demos, recipes, … to work with LLMs on OpenShift with OpenShift AI or Open Data Hub. ☆147 · Jan 7, 2026 · Updated 4 months ago
- Taxonomy tree that will allow you to create models tuned with your data ☆296 · Sep 8, 2025 · Updated 8 months ago
- The next-generation Linux workstation, designed for reliability, performance, and sustainability. ☆2,465 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆78,979 · Updated this week
- Run frontier AI locally. ☆44,293 · May 1, 2026 · Updated last week
- Work with remote image registries - retrieving information, images, signing content ☆10,812 · Updated this week
- User-friendly AI Interface (supports Ollama, OpenAI API, …) ☆135,272 · May 1, 2026 · Updated last week
- Fast, flexible LLM inference ☆7,103 · Apr 15, 2026 · Updated 3 weeks ago
- Reliable model swapping for any local OpenAI/Anthropic-compatible server - llama.cpp, vLLM, etc. ☆3,772 · May 1, 2026 · Updated last week
- Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma, and other models. ☆170,820 · Updated this week
- LLM inference in C/C++ ☆107,892 · May 2, 2026 · Updated last week