graniet / kheish
Kheish: A multi-role LLM agent for tasks like code auditing, file searching, and more, seamlessly leveraging RAG and extensible modules.
☆140 · Updated 6 months ago
Alternatives and similar repositories for kheish
Users interested in kheish are comparing it to the repositories listed below.
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes ☆203 · Updated 4 months ago
- Built for demanding AI workflows, this gateway offers low-latency, provider-agnostic access, ensuring your AI applications run smoothly a… ☆65 · Updated last month
- git-like RAG pipeline ☆233 · Updated this week
- Fast, streaming indexing, query, and agentic LLM applications in Rust ☆506 · Updated this week
- AI Assistant ☆20 · Updated 2 months ago
- Use multiple LLM backends in a single crate, simple builder-based configuration, and built-in prompt chaining & templating. ☆132 · Updated last month
- ChronoMind: Redefining Vector Intelligence Through Time. ☆71 · Updated last month
- This repository has code for fine-tuning LLMs with GRPO specifically for Rust programming, using cargo as feedback ☆95 · Updated 3 months ago
- Library for doing RAG ☆74 · Updated last month
- A Fish Speech implementation in Rust, with Candle.rs ☆92 · Updated 3 weeks ago
- Build Secure and Compliant AI agents and MCP Servers. YC W23 ☆142 · Updated 3 weeks ago
- Rust implementation of Surya ☆58 · Updated 4 months ago
- A memory framework for Large Language Models and Agents. ☆182 · Updated 6 months ago
- The MCP enterprise actors-based server, or mcp-ectors for short ☆31 · Updated last month
- llm_utils: Basic LLM tools, best practices, and minimal abstraction. ☆46 · Updated 4 months ago
- A Rust library designed for building and managing generative AI agents, leveraging the capabilities of large language models (LLMs) ☆20 · Updated 2 months ago
- A lightweight, high-performance text embedding model implemented in Rust. ☆66 · Updated last month
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆37 · Updated last year
- A prompting library ☆168 · Updated 9 months ago
- Burn through tech debt with AI agents! ☆252 · Updated this week
- OpenAI-compatible API for serving the LLAMA-2 model ☆218 · Updated last year
- ⚡ Edgen: Local, private GenAI server alternative to OpenAI. No GPU required. Run AI models locally: LLMs (Llama2, Mistral, Mixtral...), … ☆359 · Updated last year
- Super-simple, fully Rust-powered "memory" (doc store + semantic search) for LLM projects, semantic search, etc. ☆62 · Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆80 · Updated last year
- Split code into semantic chunks ☆30 · Updated 9 months ago
- A minimal implementation of GraphRAG, designed to quickly prototype whether you're able to get good sense-making out of a large dataset w… ☆31 · Updated 4 months ago
- The open-source RAG platform ☆203 · Updated this week
- The heart of the Pulsar App: fast, secure, and shared inference with a modern UI ☆56 · Updated 6 months ago
- A pure-Rust LLM inference engine (for any LLM-based MLLM, such as Spark-TTS), powered by the Candle framework ☆129 · Updated 2 weeks ago
- Native OCR for macOS, Windows, Linux ☆177 · Updated this week