overmindai / asimov
Asimov helps you build high-performance LLM apps, written in Rust 🦀
☆11 · Updated last year
Alternatives and similar repositories for asimov
Users interested in asimov are comparing it to the libraries listed below.
- auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing… ☆46 · Updated last year
- allms: One Rust Library to rule them aLLMs ☆107 · Updated this week
- A Rust 🦀 port of the Hugging Face smolagents library. ☆42 · Updated 10 months ago
- Unofficial Rust bindings to Apple's mlx framework ☆250 · Updated this week
- Anthropic Rust SDK 🦀 with async support. ☆67 · Updated last month
- Repair incomplete JSON (e.g. from streaming APIs or AI models) so it can be parsed as it's received. ☆36 · Updated 2 years ago
- Build tools for LLMs in Rust using Model Context Protocol ☆37 · Updated 11 months ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated 2 years ago
- A vector database for querying meaningfully similar data. ☆16 · Updated 11 months ago
- Fast serverless LLM inference, in Rust. ☆110 · Updated 3 months ago
- Library to stream operating system events to AI ☆40 · Updated 10 months ago
- Examples and use cases for building LLM-powered apps with Rig ☆95 · Updated last year
- A high-performance RAG indexing pipeline implemented in Rust using LanceDB and Candle ☆26 · Updated last year
- Low-rank adaptation (LoRA) for Candle. ☆169 · Updated 9 months ago
- Community Rust SDK for Deepgram. ☆62 · Updated last week
- ⚡️ Lightning-fast in-memory VectorDB written in Rust 🦀 ☆30 · Updated 11 months ago
- An OpenAI CLI built in Rust ☆10 · Updated 3 years ago
- Tera is an AI assistant that is tailored just for you and runs fully locally. ☆87 · Updated last year
- Neural search for websites, docs, articles - online! ☆146 · Updated 6 months ago
- LLaMA 7B with CUDA acceleration implemented in Rust. Minimal GPU memory needed! ☆111 · Updated 2 years ago
- OpenAI-compatible API for serving the LLaMA-2 model ☆218 · Updated 2 years ago
- A CLI in Rust to generate synthetic data for MLX-friendly training ☆25 · Updated 2 years ago
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆47 · Updated last year
- ☆50 · Updated last week
- InfinyOn Labs projects ☆14 · Updated last year
- Add local LLMs to your Web or Electron apps! Powered by Rust + WebGPU ☆106 · Updated 2 years ago
- Model Context Protocol (MCP) CLI server template for Rust ☆81 · Updated 9 months ago
- Speech detection using Silero VAD in Rust ☆30 · Updated last year
- A diffusers API in Burn (Rust) ☆25 · Updated 2 weeks ago
- ☆44 · Updated last year