RamaLama is an open-source developer tool that simplifies the local serving of AI models from any source and facilitates their use for inference in production, all through the familiar language of containers.
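As a rough sketch of that workflow, the commands below follow RamaLama's documented pull/run/serve pattern; the model reference and port number here are illustrative, not prescribed by this page.

```shell
# Minimal sketch, assuming the ramalama CLI is installed.
# Model reference and port are illustrative examples.
ramalama pull ollama://tinyllama              # fetch a model into local storage
ramalama run ollama://tinyllama               # chat with the model interactively
ramalama serve --port 8080 ollama://tinyllama # serve the model over a local REST endpoint
```

Each command runs the model inside a container with the appropriate runtime image, which is what lets the same invocation work across different hardware setups.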
☆2,702 · Apr 10, 2026 · Updated this week
Alternatives and similar repositories for ramalama
Users interested in ramalama are comparing it to the libraries listed below.
- An external provider for Llama Stack, allowing RamaLama to be used for inference. ☆21 · Dec 22, 2025 · Updated 3 months ago
- ☆27 · Jun 6, 2025 · Updated 10 months ago
- Boot and upgrade via container images. ☆1,983 · Updated this week
- Examples for building and running LLM services and applications locally with Podman. ☆200 · Feb 13, 2026 · Updated 2 months ago
- A container for deploying bootable container images. ☆440 · Apr 2, 2026 · Updated 2 weeks ago
- Tool for interactive command-line environments on Linux. ☆3,311 · Updated this week
- Work with LLMs in a local environment using containers. ☆291 · Updated this week
- Models as a Service. ☆75 · Oct 21, 2025 · Updated 5 months ago
- InstructLab Core package. Use this to chat with a model and execute the InstructLab workflow to train a model using custom taxonomy data… ☆1,415 · Mar 30, 2026 · Updated 2 weeks ago
- Distribute and run LLMs with a single file. ☆24,121 · Updated this week
- An open-source, extensible AI agent that goes beyond code suggestions: install, execute, edit, and test with any LLM. ☆42,084 · Updated this week
- A small-form-factor OpenShift/Kubernetes optimized for edge computing. ☆821 · Apr 10, 2026 · Updated last week
- Podman Desktop is the best free and open-source tool to work with containers and Kubernetes for developers. Get an intuitive and user-fri… ☆7,532 · Updated this week
- Podman: a tool for managing OCI containers and pods. ☆31,414 · Updated this week
- Achieve state-of-the-art inference performance with modern accelerators on Kubernetes. ☆2,957 · Updated this week
- Collaborates with anacondas. ☆40 · Apr 5, 2026 · Updated last week
- Collection of demos for building Llama Stack-based apps on OpenShift. ☆63 · Apr 9, 2026 · Updated last week
- This project makes running the InstructLab large language model (LLM) fine-tuning process easy and flexible on OpenShift. ☆27 · Aug 27, 2025 · Updated 7 months ago
- ODH Tools & Extensions Companion. ☆30 · Feb 8, 2026 · Updated 2 months ago
- An OCI base image of Fedora CoreOS with batteries included. ☆601 · Updated this week
- CentOS Stream-based base image. ☆13 · Feb 24, 2025 · Updated last year
- A tool that facilitates building OCI images. ☆8,732 · Updated this week
- ☆48 · Sep 21, 2025 · Updated 6 months ago
- Support for bootable OS containers (bootc) and generating disk images. ☆472 · Updated this week
- Composable building blocks for building LLM apps. ☆8,324 · Updated this week
- Create and maintain base bootable container images from Fedora ELN and CentOS Stream packages. ☆47 · Feb 24, 2026 · Updated last month
- Resources, demos, recipes, and more for working with LLMs on OpenShift with OpenShift AI or Open Data Hub. ☆147 · Jan 7, 2026 · Updated 3 months ago
- LocalAI is the open-source AI engine. Run any model (LLMs, vision, voice, image, video) on any hardware; no GPU required. ☆45,386 · Updated this week
- MicroShift Management and Automation Collection. ☆13 · Sep 11, 2024 · Updated last year
- Reliable model swapping for any local OpenAI/Anthropic-compatible server (llama.cpp, vLLM, etc.). ☆3,212 · Updated this week
- Taxonomy tree that lets you create models tuned with your data. ☆294 · Sep 8, 2025 · Updated 7 months ago
- The next-generation Linux workstation, designed for reliability, performance, and sustainability. ☆2,431 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆76,536 · Updated this week
- Run frontier AI locally. ☆43,503 · Updated this week
- Work with remote image registries: retrieving information, images, signing content. ☆10,721 · Updated this week
- Fast, flexible LLM inference. ☆6,994 · Updated this week
- User-friendly AI interface (supports Ollama, OpenAI API, ...). ☆131,509 · Updated this week
- Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma, and other models. ☆169,151 · Updated this week
- LLM inference in C/C++. ☆103,237 · Updated this week