containers / ramalama
RamaLama is an open-source developer tool that simplifies the local serving of AI models from any source and facilitates their use for inference in production, all through the familiar language of containers.
☆2,296 · Updated this week
Alternatives and similar repositories for ramalama
Users interested in ramalama are comparing it to the repositories listed below.
- Boot and upgrade via container images ☆1,656 · Updated this week
- Reliable model swapping for any local OpenAI-compatible server (llama.cpp, vLLM, etc.) ☆1,862 · Updated last week
- InstructLab Core package. Use this to chat with a model and execute the InstructLab workflow to train a model using custom taxonomy data… ☆1,379 · Updated this week
- The terminal client for Ollama ☆2,240 · Updated last month
- VS Code extension for LLM-assisted code/text completion ☆1,043 · Updated last week
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 ☆1,360 · Updated 3 weeks ago
- Generate Podman Quadlet files from a Podman command, compose file, or existing object ☆1,150 · Updated last year
- An external provider for Llama Stack that enables the use of RamaLama for inference. ☆21 · Updated 2 weeks ago
- Work with LLMs in a local environment using containers ☆262 · Updated this week
- A container for deploying bootable container images. ☆380 · Updated this week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,590 · Updated last week
- Support for bootable OS containers (bootc) and generating disk images ☆464 · Updated last week
- ☆484 · Updated last week
- The most no-nonsense, locally or API-hosted AI code completion plugin for Visual Studio Code - like GitHub Copilot but 100% free. ☆3,615 · Updated 3 months ago
- Go manage your Ollama models ☆1,546 · Updated last month
- Artificial Neural Engine Machine Learning Library ☆1,235 · Updated 2 months ago
- Proxy that lets you use Ollama as a Copilot-style assistant, like GitHub Copilot ☆788 · Updated 2 months ago
- Local CLI Copilot, powered by Ollama. 💻🦙 ☆1,458 · Updated 2 weeks ago
- Effortlessly run LLM backends, APIs, frontends, and services with one command. ☆2,136 · Updated 2 weeks ago
- Examples for building and running LLM services and applications locally with Podman ☆183 · Updated 3 months ago
- Granite Code Models: A Family of Open Foundation Models for Code Intelligence ☆1,239 · Updated 4 months ago
- AI Inference Operator for Kubernetes. The easiest way to serve ML models in production. Supports VLMs, LLMs, embeddings, and speech-to-te… ☆1,090 · Updated this week
- Open source platform for AI Engineering: OpenTelemetry-native LLM Observability, GPU Monitoring, Guardrails, Evaluations, Prompt Manageme… ☆2,017 · Updated this week
- Taxonomy tree that allows you to create models tuned with your data ☆286 · Updated 2 months ago
- A minimal LLM chat app that runs entirely in your browser ☆1,028 · Updated last month
- A powerful document AI question-answering tool that connects to your local Ollama models. Create, manage, and interact with RAG systems f… ☆1,082 · Updated 3 months ago
- The next generation Linux workstation, designed for reliability, performance, and sustainability. ☆2,095 · Updated this week
- Helm chart for Ollama on Kubernetes ☆518 · Updated last week
- Tool for interactive command line environments on Linux ☆3,094 · Updated 3 weeks ago
- An open-source alternative to GitHub Copilot that runs locally. ☆983 · Updated last year