edgenai / edgen
⚡ Edgen: a local, private GenAI server alternative to OpenAI. No GPU required. Run AI models locally: LLMs (Llama 2, Mistral, Mixtral, ...), speech-to-text (Whisper), and many others.
☆328 · Updated 3 months ago
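Because Edgen exposes an OpenAI-compatible API, a client written for OpenAI can usually be pointed at a local base URL instead. A minimal Python sketch using only the standard library; the port, path prefix, and model alias below are illustrative assumptions, not Edgen's documented defaults:

```python
import json

# An OpenAI-style chat-completion request body. The /v1/chat/completions
# path mirrors the OpenAI API; the base URL and model alias are assumptions.
base_url = "http://localhost:33322/v1"  # hypothetical local port
endpoint = base_url + "/chat/completions"
payload = {
    "model": "default",  # server-side model alias (assumed)
    "messages": [{"role": "user", "content": "Hello from a local client"}],
    "stream": False,
}
body = json.dumps(payload)  # send with any HTTP client, e.g. urllib.request
print(endpoint)
print(body)
```

Swapping the base URL like this is the whole point of OpenAI-compatible servers: existing SDKs and tools keep working while inference stays on your machine.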
Related projects:
- Multi-platform desktop app to download and run Large Language Models (LLMs) locally on your computer. ☆260 · Updated last year
- A multi-platform desktop application to evaluate and compare LLMs, written in Rust and React. ☆430 · Updated this week
- OpenAI-compatible API for serving the LLaMA-2 model. ☆212 · Updated 11 months ago
- 🦀 A curated list of Rust tools, libraries, and frameworks for working with LLMs, GPT, and AI. ☆236 · Updated 6 months ago
- Hybrid vector database with a flexible SQL storage engine and multi-index support. ☆343 · Updated this week
- Like grep, but for natural-language questions. Based on Mistral 7B or Mixtral 8x7B. ☆373 · Updated 6 months ago
- A cross-platform browser ML framework. ☆567 · Updated last week
- Open-source alternative to Perplexity AI with the ability to run locally. ☆130 · Updated 2 months ago
- A minimal, high-performance, multisource, multimodal, local embedding solution, built in Rust. ☆220 · Updated this week
- Fast, streaming indexing and query library for AI (RAG) applications, written in Rust. ☆129 · Updated this week
- LLM orchestrator built in Rust. ☆261 · Updated 6 months ago
- Instant, controllable, local pre-trained AI models in Rust. ☆1,281 · Updated this week
- ☆134 · Updated 7 months ago
- Library for generating vector embeddings and reranking, in Rust. ☆250 · Updated 3 weeks ago
- Super-simple, fully Rust-powered "memory" (doc store + semantic search) for LLM projects, semantic search, etc. ☆52 · Updated 11 months ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust. ☆79 · Updated 8 months ago
- Run any ML model from any programming language. ☆421 · Updated 8 months ago
- From anywhere you can type, query and stream the output of an LLM or any other script. ☆442 · Updated 5 months ago
- Split text into semantic chunks, up to a desired chunk size. Supports calculating length by characters and tokens, and is callable from R… ☆245 · Updated this week
- Neural search for websites, docs, articles - online! ☆124 · Updated last year
- An Interface for Deterministic Signals from Probabilistic LLM Vibes. ☆60 · Updated last week
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server. ☆229 · Updated 3 weeks ago
- Tera is an AI assistant tailored just for you that runs fully locally. ☆55 · Updated 6 months ago
- High-level, optionally asynchronous Rust bindings to llama.cpp. ☆161 · Updated 3 months ago
- 🤖 TUI interface for LLMs, written in Rust. ☆303 · Updated 2 weeks ago
- High-performance key-value store for ML inference. 100x faster than Redis. ☆204 · Updated 4 months ago
- Spotify/Annoy-inspired approximate nearest neighbors in Rust, based on LMDB and optimized for memory usage. ☆204 · Updated this week
- A Rust library for interacting with the Ollama API. ☆456 · Updated last week
- A tiny embedding database in pure Rust. ☆365 · Updated 8 months ago
- Replace OpenAI with llama.cpp automagically. ☆277 · Updated 3 months ago
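Several of the projects above (the semantic chunker, the embedding and RAG libraries) revolve around the same primitive: splitting text into chunks no longer than a size limit before embedding. A generic sketch of that idea in Python — this is not any listed library's API, and the sentence-boundary heuristic and hard-cut fallback are illustrative assumptions:

```python
def chunk_text(text: str, max_len: int) -> list[str]:
    """Greedily pack sentence-like pieces into chunks of at most max_len characters."""
    pieces = text.replace("\n", " ").split(". ")
    chunks, current = [], ""
    for piece in pieces:
        piece = piece.strip()
        if not piece:
            continue
        candidate = (current + ". " + piece) if current else piece
        if len(candidate) <= max_len:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = piece[:max_len]  # hard cut for oversized pieces (lossy)
    if current:
        chunks.append(current)
    return chunks
```

Real chunkers (like the one listed above) also count length in tokens rather than characters, since embedding models enforce token limits, but the greedy-packing structure is the same.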