EricLBuehler / mistral.rs
Fast, flexible LLM inference
☆6,508 · Updated this week
Alternatives and similar repositories for mistral.rs
Users interested in mistral.rs often compare it to the libraries listed below.
- Deep learning at the speed of light. ☆2,766 · Updated last week
- Burn is a next-generation tensor library and deep learning framework that doesn't compromise on flexibility, efficiency, or portability. ☆14,290 · Updated this week
- Local AI API platform ☆2,762 · Updated 7 months ago
- Minimalist ML framework for Rust ☆19,322 · Updated this week
- Instant, controllable, local pre-trained AI models in Rust ☆2,135 · Updated this week
- [Unmaintained, see README] An ecosystem of Rust libraries for working with large language models ☆6,146 · Updated last year
- Bionic is an on-premises replacement for ChatGPT, offering the advantages of generative AI while maintaining strict data confidentiality ☆2,298 · Updated this week
- A vector search SQLite extension that runs anywhere! ☆6,858 · Updated last year
- Distributed LLM and Stable Diffusion inference for mobile, desktop, and server. ☆2,901 · Updated last year
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge ☆1,588 · Updated last week
- AICI: Prompts as (Wasm) Programs ☆2,061 · Updated last year
- A blazing-fast inference solution for text embedding models ☆4,476 · Updated last week
- Open-source LLM load balancer and serving platform for self-hosting LLMs at scale 🏓🦙 Alternative to projects like llm-d, Docker Model R… ☆1,447 · Updated this week
- Fast ML inference & training for ONNX models in Rust ☆1,963 · Updated this week
- Cohere Toolkit is a collection of prebuilt components enabling users to quickly build and deploy RAG applications. ☆3,154 · Updated this week
- Developer-friendly OSS embedded retrieval library for multimodal AI. Search more; manage less. ☆8,788 · Updated this week
- 🚂 🦀 The one-person framework for Rust for side projects and startups ☆8,613 · Updated last month
- 🦜️🔗 LangChain for Rust, the easiest way to write LLM-based programs in Rust ☆1,221 · Updated this week
- Fast, accurate, lightweight Python library for state-of-the-art embeddings ☆2,703 · Updated last month
- A fast llama2 decoder in pure Rust. ☆1,059 · Updated 2 years ago
- ⚙️🦀 Build modular and scalable LLM applications in Rust ☆5,896 · Updated this week
- ☆3,071 · Updated 2 months ago
- 🦀 A curated list of Rust tools, libraries, and frameworks for working with LLMs, GPT, and AI ☆526 · Updated last year
- Minimal LLM inference in Rust ☆1,030 · Updated last year
- RAG (Retrieval-Augmented Generation) framework by TrueFoundry for building modular, open-source applications for production ☆4,314 · Updated 2 months ago
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference; more devices mean faster inference. ☆2,822 · Updated last week
- Training LLMs with QLoRA + FSDP ☆1,539 · Updated last year
- Lightweight, standalone C++ inference engine for Google's Gemma models ☆6,728 · Updated this week
- A simple and easy-to-use library for interacting with the Ollama API ☆982 · Updated 3 weeks ago
- Run PyTorch LLMs locally on servers, desktop, and mobile ☆3,624 · Updated 5 months ago