Belullama is a comprehensive AI application that bundles Ollama, Open WebUI, and Automatic1111 (Stable Diffusion WebUI) into a single, easy-to-use package.
☆205 · May 30, 2025 · Updated 10 months ago
Alternatives and similar repositories for Belullama
Users interested in Belullama are comparing it to the libraries listed below.
- A sleek, customizable interface for managing LLMs, with responsive design and easy agent personalization. ☆17 · Aug 30, 2024 · Updated last year
- Chat with your PDF using your local LLM via the Ollama client. ☆42 · Oct 17, 2024 · Updated last year
- OllaDeck is a purple technology stack for Generative AI (text modality) cybersecurity. It provides a comprehensive set of tools for both … ☆17 · Sep 21, 2024 · Updated last year
- LlamaCards is a web application that provides a dynamic interface for interacting with LLM models in real time. This app allows users to … ☆38 · Aug 28, 2024 · Updated last year
- Proteus is an experimental platform that combines the power of Large Language Models with the Genesis physics engine. ☆26 · Dec 20, 2024 · Updated last year
- RetroChat is a powerful command-line interface for interacting with various AI language models. It provides a seamless experience for eng… ☆86 · Jul 13, 2025 · Updated 9 months ago
- Publish local LLMs and LLM apps on the internet. ☆27 · Aug 17, 2025 · Updated 7 months ago
- The heart of The Pulsar App: fast, secure, shared inference with a modern UI. ☆59 · Dec 1, 2024 · Updated last year
- Llama.cpp launcher with integrated Hugging Face support. ☆47 · Mar 23, 2026 · Updated 3 weeks ago
- An auto-sleeping and -waking framework around llama.cpp. ☆12 · Feb 8, 2025 · Updated last year
- TUI for Ollama and other LLM providers. ☆433 · Mar 26, 2026 · Updated 3 weeks ago
- WebAISum is a Python script that allows you to summarize web pages using AI models. It supports both local models like Ollama and remote … ☆15 · Apr 28, 2024 · Updated last year
- JacQues is a Dash-based interactive web application that facilitates real-time chat and document management. ☆22 · Jan 5, 2026 · Updated 3 months ago
- LLM collaboration. ☆12 · Aug 23, 2024 · Updated last year
- Personal AI assistant with an interface. ☆13 · Nov 9, 2024 · Updated last year
- Simple system-tray application to monitor the status of your LLM models running on Ollama. ☆24 · Jun 24, 2025 · Updated 9 months ago
- Locally hosted AI-agent Python tool to generate novel research hypotheses, titles, and abstracts. ☆30 · Apr 30, 2025 · Updated 11 months ago
- Connect to a running instance of Ollama and use Mixtral in your Workers application. ☆19 · Dec 15, 2023 · Updated 2 years ago
- 🤖 UI for gpt-all-star: https://github.com/kyaukyuai/gpt-all-star ☆28 · Updated this week
- Convert files, folders, and GitHub repos into AI/LLM-ready files. ☆163 · Jan 31, 2025 · Updated last year
- AI Model Prompt Tester (AIPT for short) is a simple app that checks how suitable each model is for a given prompt. ☆15 · Jul 7, 2024 · Updated last year
- "A towel is about the most massively useful thing an interstellar AI hitchhiker can have." ☆48 · Oct 9, 2024 · Updated last year
- Experience the power of AI with this free AI voice-generator demo. Utilizing Deepgram and Groq, we transform text into voice seamlessly. … ☆37 · Jun 12, 2024 · Updated last year
- The Fastest Way to Fine-Tune LLMs Locally. ☆339 · Dec 18, 2025 · Updated 3 months ago
- Open-source files for the ELEGOO Jupiter 6K mono-LCD 3D printer. ☆13 · Jun 9, 2023 · Updated 2 years ago
- A proxy that hosts multiple single-model runners such as llama.cpp and vLLM. ☆13 · May 30, 2025 · Updated 10 months ago
- Run Ollama & GGUF models easily with a single command. ☆52 · May 15, 2024 · Updated last year
- ☆14 · Sep 20, 2024 · Updated last year
- CLI that uses DSPy to interact with MCP servers. ☆24 · Mar 10, 2025 · Updated last year
- Efficient visual programming for AI language models. ☆361 · May 13, 2025 · Updated 11 months ago
- Instructions to run Ollama using just docker-compose. ☆15 · Feb 11, 2025 · Updated last year
- User-friendly WebUI for LLMs. ☆26 · May 23, 2024 · Updated last year
- Historical Language Model for London: a specialized LLM trained on 1500–1850 historical English text. ☆29 · Nov 1, 2025 · Updated 5 months ago
- Search the web and your self-hosted apps using local AI agents. ☆577 · Nov 17, 2024 · Updated last year
- ☆26 · Sep 25, 2024 · Updated last year
- ☆135 · Apr 8, 2026 · Updated last week
- Local LLM setup. ☆18 · Jul 1, 2024 · Updated last year
- OmniByteFormer is a generalized Transformer model that can process any type of data by converting it into byte sequences, bypassing tradi… ☆15 · Updated this week
- A Qt GUI for large language models. ☆40 · Dec 27, 2023 · Updated 2 years ago
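Several entries above package Ollama behind Docker. As a rough point of reference for what a compose-only setup (like the docker-compose instructions entry) involves, here is a minimal sketch; the volume name is illustrative, and the `ollama/ollama` image and port 11434 are Ollama's published defaults:

```yaml
# docker-compose.yml — minimal Ollama service (sketch, not the exact file from any repo above)
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"             # Ollama's default HTTP API port
    volumes:
      - ollama_data:/root/.ollama # persist downloaded models across restarts

volumes:
  ollama_data:
```

Start it with `docker compose up -d`, then verify the API responds at `http://localhost:11434`.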