aidatatools / ollama-benchmark
LLM Benchmark for Throughput via Ollama (Local LLMs)
☆280 · Updated 2 weeks ago
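The core measurement such a throughput benchmark performs can be sketched with a single non-streaming request against Ollama's HTTP API. This is an illustrative sketch, not code from the repository: the `benchmark` helper and the `localhost:11434` default host are assumptions, while `eval_count` and `eval_duration` (in nanoseconds) are the token counters Ollama returns from `/api/generate`.

```python
import json
import urllib.request


def tokens_per_second(eval_count: int, eval_duration_ns: int) -> float:
    """Throughput in tokens/sec from Ollama's generation counters
    (eval_duration is reported in nanoseconds)."""
    return eval_count / eval_duration_ns * 1e9


def benchmark(model: str, prompt: str, host: str = "http://localhost:11434") -> float:
    """Send one non-streaming generate request and report throughput.
    Hypothetical helper: requires a running Ollama server at `host`."""
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=json.dumps({"model": model, "prompt": prompt, "stream": False}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return tokens_per_second(body["eval_count"], body["eval_duration"])
```

For example, a response reporting 128 generated tokens over 2 seconds works out to 64 tokens/sec; real benchmarks typically average several runs and separate prompt-evaluation from generation throughput.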
Alternatives and similar repositories for ollama-benchmark
Users interested in ollama-benchmark are comparing it to the libraries listed below.
- A proxy server for multiple ollama instances with Key security ☆483 · Updated 3 weeks ago
- Handy tool to measure the performance and efficiency of LLM workloads. ☆71 · Updated 4 months ago
- Code execution utilities for Open WebUI & Ollama ☆297 · Updated 9 months ago
- VSCode AI coding assistant powered by a self-hosted llama.cpp endpoint. ☆183 · Updated 7 months ago
- Download models from the Ollama library, without Ollama ☆93 · Updated 9 months ago
- Benchmark LLM performance ☆104 · Updated last year
- A simple-to-use Ollama autocompletion engine with options exposed and streaming functionality ☆135 · Updated 4 months ago
- Dagger functions to import Hugging Face GGUF models into a local ollama instance and optionally push them to ollama.com. ☆116 · Updated last year
- Link your Ollama models to LM-Studio ☆141 · Updated last year
- QA-Pilot is an interactive chat project that leverages online/local LLMs for rapid understanding and navigation of GitHub code repositories. ☆299 · Updated this week
- Nginx proxy server in a Docker container to authenticate & proxy requests to Ollama from the public Internet via Cloudflare Tunnel ☆133 · Updated 2 months ago
- Fully-featured, beautiful web interface for vLLM, built with NextJS. ☆150 · Updated 3 months ago
- Open‑WebUI Tools is a modular toolkit designed to extend and enrich your Open WebUI instance, turning it into a powerful AI workstation. … ☆326 · Updated last week
- 🚀 Retrieval Augmented Generation (RAG) with txtai. Combine search and LLMs to find insights with your own data. ☆398 · Updated 3 months ago
- This repository contains custom pipelines developed for the OpenWebUI framework, including advanced workflows such as long-term memory fi… ☆72 · Updated 3 months ago
- Lightweight & fast AI inference proxy for self-hosted LLM backends like Ollama, LM Studio and others. Designed for speed, simplicity and… ☆81 · Updated this week
- OpenAPI Tool Servers ☆629 · Updated 2 months ago
- 🏗️ Fine-tune, build, and deploy open-source LLMs easily! ☆467 · Updated this week
- beep boop 🤖 (experimental) ☆113 · Updated 7 months ago
- InferX is an Inference Function-as-a-Service platform ☆129 · Updated last week
- Create Linux commands from natural language, in the shell. ☆113 · Updated last week
- Lightweight inference server for OpenVINO ☆202 · Updated this week
- ☆95 · Updated last week
- LLMX; Easiest 3rd party Local LLM UI for the web! ☆267 · Updated this week
- Educational framework exploring ergonomic, lightweight multi-agent orchestration. Modified to use a local Ollama endpoint ☆50 · Updated 10 months ago
- ☆102 · Updated 3 months ago
- ☆209 · Updated last month
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆162 · Updated last year
- API up your Ollama Server. ☆173 · Updated 2 months ago
- Benchmark your local LLMs. ☆51 · Updated last year