jonathandale / chat-ollama
UI for Ollama
☆14 · Updated 3 months ago
Alternatives and similar repositories for chat-ollama
Users interested in chat-ollama are comparing it to the libraries listed below.
- PyGPTPrompt: A CLI tool that manages context windows for AI models, facilitating user interaction and data ingestion for optimized long-t… ☆30 · Updated last year
- Complex RAG backend ☆29 · Updated last year
- Terminal Voice Assistant is a powerful and flexible tool designed to help users interact with their terminal using natural language comma… ☆19 · Updated last year
- This small API downloads and exposes access to NeuML's txtai-wikipedia and full wikipedia datasets, taking in a query and returning full … ☆101 · Updated 3 months ago
- For inferring and serving local LLMs using the MLX framework ☆108 · Updated last year
- ☆38 · Updated last year
- ☆30 · Updated last year
- Experiments with open source LLMs ☆74 · Updated 2 months ago
- Gradio-based tool to run open-source LLMs directly from Hugging Face ☆96 · Updated last year
- Distributed inference for MLX LLMs ☆99 · Updated last year
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆71 · Updated last year
- Easily create LLM automation/agent workflows ☆60 · Updated last year
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆119 · Updated last year
- Task-driven LLM multi-agent framework that gives you the building blocks to create anything you wish ☆80 · Updated last week
- A framework for hosting and scaling AI agents. ☆38 · Updated last year
- ☆134 · Updated 7 months ago
- A high-performance batching router that optimises maximum throughput for text inference workloads ☆16 · Updated 2 years ago
- Client-side toolkit for using large language models, including self-hosted ones ☆113 · Updated last week
- Capture, tag, and search images locally with OSS models. ☆44 · Updated 10 months ago
- This is the Mixture-of-Agents (MoA) concept, adapted from the original work by TogetherAI. My version is tailored for local model usage a… ☆118 · Updated last year
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆99 · Updated 5 months ago
- Local LLM inference & management server with built-in OpenAI API ☆31 · Updated last year
- This project is a reverse-engineered version of Figma's tone changer. It uses Groq's Llama-3-8b for high-speed inference and to adjust th… ☆90 · Updated last year
- Very basic framework for composable, parameterized large language model (Q)LoRA / (Q)DoRA fine-tuning using mlx, mlx_lm, and OgbujiPT. ☆43 · Updated 5 months ago
- Run ollama & GGUF models easily with a single command ☆52 · Updated last year
- A simple experiment on letting two local LLMs have a conversation about anything! ☆112 · Updated last year
- ☆16 · Updated last year
- Ollama function calling demo ☆29 · Updated last year
- This code implements a local LLM selector that picks from your locally installed Ollama LLMs for a specific user query ☆103 · Updated 2 years ago
- Generate train.jsonl and valid.jsonl files to use for fine-tuning Mistral and other LLMs (a minimal sketch follows this list). ☆96 · Updated last year
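For that last entry, the exact JSONL layout depends on the fine-tuning stack. Below is a minimal sketch of producing train.jsonl / valid.jsonl in the one-object-per-line {"text": ...} form commonly used by mlx_lm's LoRA trainer; the file names, split ratio, and placeholder examples are illustrative, not taken from the repository itself.

```python
"""Minimal sketch: split formatted examples into train.jsonl / valid.jsonl.

Assumes the one-record-per-line {"text": ...} schema; other trainers may
expect prompt/completion pairs instead, so adjust the dict accordingly.
"""
import json
import random


def write_jsonl(path: str, records: list[dict]) -> None:
    # One JSON object per line, UTF-8, no ASCII escaping of non-Latin text.
    with open(path, "w", encoding="utf-8") as fh:
        for rec in records:
            fh.write(json.dumps(rec, ensure_ascii=False) + "\n")


def make_splits(texts: list[str], valid_fraction: float = 0.1, seed: int = 0) -> None:
    records = [{"text": t} for t in texts]
    random.Random(seed).shuffle(records)          # deterministic shuffle
    n_valid = max(1, int(len(records) * valid_fraction))
    write_jsonl("valid.jsonl", records[:n_valid])
    write_jsonl("train.jsonl", records[n_valid:])


if __name__ == "__main__":
    make_splits([
        "### Instruction: Say hi.\n### Response: Hi!",
        "### Instruction: Name a planet.\n### Response: Mars.",
        # ...more formatted training examples...
    ])
```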
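More broadly, many of the front-ends and helpers listed above (chat-ollama included) are thin clients over Ollama's local REST API. Here is a minimal sketch of calling that API directly, assuming Ollama is running on its default port 11434 and that a model has already been pulled; the model name "llama3" and the prompt are placeholders.

```python
"""Minimal sketch: query a local Ollama server over its REST API."""
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def generate(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # request a single JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body["response"]  # non-streaming responses carry the text here


if __name__ == "__main__":
    print(generate("Summarise what a Modelfile is in one sentence."))
```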