intentee / paddler
Open-source LLM load balancer and serving platform for self-hosting LLMs at scale
⭐ 1,360 · Updated 3 weeks ago
Alternatives and similar repositories for paddler
Users interested in paddler are comparing it to the libraries listed below.
- Minimal LLM inference in Rust · ⭐ 1,020 · Updated last year
- A high-performance inference engine for AI models · ⭐ 1,363 · Updated last week
- Like grep but for natural language questions. Based on Mistral 7B or Mixtral 8x7B. · ⭐ 385 · Updated last year
- A cross-platform browser ML framework. · ⭐ 719 · Updated 11 months ago
- WebAssembly binding for llama.cpp - Enabling on-browser LLM inference · ⭐ 933 · Updated last month
- VS Code extension for LLM-assisted code/text completion · ⭐ 1,043 · Updated last week
- Efficient platform for inference and serving local LLMs, including an OpenAI-compatible API server. · ⭐ 514 · Updated this week
- Big & Small LLMs working together · ⭐ 1,194 · Updated this week
- Super-fast Structured Outputs · ⭐ 595 · Updated 3 weeks ago
- A multi-platform desktop application to evaluate and compare LLM models, written in Rust and React. · ⭐ 865 · Updated 6 months ago
- Scalable, fast, and disk-friendly vector search in Postgres, the successor of pgvecto.rs. · ⭐ 1,295 · Updated last week
- Felafax is building AI infra for non-NVIDIA GPUs · ⭐ 568 · Updated 9 months ago
- A realtime serving engine for Data-Intensive Generative AI Applications · ⭐ 1,062 · Updated last week
- Korvus is a search SDK that unifies the entire RAG pipeline in a single database query. Built on top of Postgres with bindings for Python… · ⭐ 1,451 · Updated 9 months ago
- SeekStorm - sub-millisecond full-text search library & multi-tenancy server in Rust · ⭐ 1,765 · Updated this week
- Fully neural approach for text chunking · ⭐ 392 · Updated 3 weeks ago
- Large-scale LLM inference engine · ⭐ 1,583 · Updated this week
- Things you can do with the token embeddings of an LLM · ⭐ 1,450 · Updated 3 weeks ago
- An application for running LLMs locally on your device, with your documents, facilitating detailed citations in generated responses. · ⭐ 620 · Updated last year
- A hub for various industry-specific schemas to be used with VLMs. · ⭐ 536 · Updated 5 months ago
- Git-like RAG pipeline · ⭐ 247 · Updated last week
- Replace OpenAI with Llama.cpp Automagically. · ⭐ 325 · Updated last year
- Rust library for generating vector embeddings and reranking; a rewrite of qdrant/fastembed. · ⭐ 653 · Updated last week
- Git Based Memory Storage for Conversational AI Agent · ⭐ 679 · Updated 2 months ago
- Helix is a private GenAI stack for building AI agents with declarative pipelines, knowledge (RAG), API bindings, and first-class testi… · ⭐ 525 · Updated last week
- LLM-powered lossless compression tool · ⭐ 289 · Updated last year
- Docs for GGUF quantization (unofficial) · ⭐ 308 · Updated 3 months ago
- Split text into semantic chunks, up to a desired chunk size. Supports calculating length by characters and tokens, and is callable from R… · ⭐ 516 · Updated last week
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM… · ⭐ 606 · Updated 8 months ago
- Reliable model swapping for any local OpenAI-compatible server (llama.cpp, vLLM, etc.) · ⭐ 1,862 · Updated last week