containers / ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from any source and facilitates their use for inference in production, all through the familiar language of containers.
☆2,532 · Updated last week
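RamaLama's `serve` command exposes a local model behind an OpenAI-compatible HTTP API, so ordinary client code can talk to it once a model is running. Below is a minimal sketch in Python, assuming a server started with `ramalama serve <model>` and listening on `localhost:8080`; the address, endpoint path, and model name are illustrative assumptions, not guaranteed defaults.

```python
# Minimal sketch: query a locally served model over an OpenAI-compatible
# chat completions endpoint, e.g. one started with `ramalama serve <model>`.
# The host, port, and model name below are assumptions for illustration;
# check the server's own output for the address it actually listens on.
import json
import urllib.request

url = "http://localhost:8080/v1/chat/completions"  # assumed local endpoint
payload = {
    "model": "tinyllama",  # placeholder model name
    "messages": [{"role": "user", "content": "Say hello in one sentence."}],
}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["choices"][0]["message"]["content"])
```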
Alternatives and similar repositories for ramalama
Users interested in ramalama are comparing it to the libraries listed below.
- Boot and upgrade via container images ☆1,816 · Updated last week
- Reliable model swapping for any local OpenAI/Anthropic-compatible server - llama.cpp, vllm, etc. ☆2,209 · Updated last week
- Work with LLMs in a local environment using containers ☆275 · Updated this week
- The terminal client for Ollama ☆2,305 · Updated last month
- InstructLab Core package. Use this to chat with a model and execute the InstructLab workflow to train a model using custom taxonomy data… ☆1,403 · Updated last week
- A container for deploying bootable container images. ☆401 · Updated last week
- VS Code extension for LLM-assisted code/text completion ☆1,135 · Updated last week
- Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our Discord: https… ☆2,042 · Updated this week
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference; more devices mean faster inference. ☆2,804 · Updated last week
- An external provider for Llama Stack allowing for the use of RamaLama for inference. ☆20 · Updated last month
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 ☆1,425 · Updated last week
- ☆809 · Updated last week
- Tool for interactive command-line environments on Linux ☆3,205 · Updated this week
- Support for bootable OS containers (bootc) and generating disk images ☆466 · Updated this week
- Effortlessly run LLM backends, APIs, frontends, and services with one command. ☆2,333 · Updated this week
- Generate Podman Quadlet files from a Podman command, compose file, or existing object ☆1,273 · Updated last week
- CodeGate: Security, Workspaces and Multiplexing for AI Agentic Frameworks ☆710 · Updated 7 months ago
- Go manage your Ollama models ☆1,650 · Updated 3 weeks ago
- Open source platform for AI Engineering: OpenTelemetry-native LLM Observability, GPU Monitoring, Guardrails, Evaluations, Prompt Manageme… ☆2,168 · Updated this week
- A minimal LLM chat app that runs entirely in your browser ☆1,062 · Updated 3 months ago
- Examples for building and running LLM services and applications locally with Podman ☆190 · Updated 5 months ago
- Proxy that allows you to use Ollama as a copilot, like GitHub Copilot ☆812 · Updated this week
- Local AI API Platform ☆2,761 · Updated 6 months ago
- Podman desktop companion ☆1,581 · Updated this week
- Docker Model Runner ☆375 · Updated this week
- High-performance IDE for Jupyter Notebooks ☆2,283 · Updated last month
- Powerful system container and virtual machine manager ☆4,737 · Updated this week
- Infrastructure to build the Granite.Code vscode extension ☆30 · Updated 2 months ago
- Open-source, self-hosted sandboxes for AI agents ☆4,479 · Updated 2 weeks ago
- The next generation Linux workstation, designed for reliability, performance, and sustainability. ☆2,261 · Updated this week