EricLBuehler / mistral.rs
Blazingly fast LLM inference.
☆6,088 · Updated last week
Alternatives and similar repositories for mistral.rs
Users interested in mistral.rs are comparing it to the libraries listed below.
- Deep learning at the speed of light. ☆2,495 · Updated this week
- A vector search SQLite extension that runs anywhere! ☆6,116 · Updated 7 months ago
- [Unmaintained, see README] An ecosystem of Rust libraries for working with large language models ☆6,127 · Updated last year
- Local AI API Platform ☆2,761 · Updated 2 months ago
- A blazing fast inference solution for text embeddings models ☆4,014 · Updated this week
- Minimalist ML framework for Rust ☆18,055 · Updated last week
- Distributed LLM and StableDiffusion inference for mobile, desktop and server. ☆2,881 · Updated 10 months ago
- AICI: Prompts as (Wasm) Programs ☆2,049 · Updated 7 months ago
- ☆3,016 · Updated last year
- Developer-friendly, embedded retrieval engine for multimodal AI. Search More; Manage Less. ☆7,549 · Updated this week
- Instant, controllable, local pre-trained AI models in Rust ☆2,010 · Updated this week
- Burn is a next generation Deep Learning Framework that doesn't compromise on flexibility, efficiency and portability. ☆12,845 · Updated this week
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge ☆1,498 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,417 · Updated 3 months ago
- Bionic is an on-premise replacement for ChatGPT, offering the advantages of Generative AI while maintaining strict data confidentiality ☆2,242 · Updated last month
- Open-source LLMOps platform for hosting and scaling AI in your own infrastructure 🏓🦙 ☆1,302 · Updated last week
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices means faster inference. ☆2,605 · Updated last week
- A fast llama2 decoder in pure Rust. ☆1,056 · Updated last year
- PyTorch native post-training library ☆5,484 · Updated this week
- LSP-AI is an open-source language server that serves as a backend for AI-powered functionality, designed to assist and empower software e… ☆2,985 · Updated 8 months ago
- Any model. Any hardware. Zero compromise. Built with @ziglang / @openxla / MLIR / @bazelbuild ☆2,557 · Updated this week
- Cohere Toolkit is a collection of prebuilt components enabling users to quickly build and deploy RAG applications. ☆3,110 · Updated this week
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,309 · Updated last month
- Minimal LLM inference in Rust ☆1,014 · Updated 10 months ago
- Training LLMs with QLoRA + FSDP ☆1,527 · Updated 10 months ago
- Fast, Accurate, Lightweight Python library to make State of the Art Embedding ☆2,371 · Updated 2 weeks ago
- Large Language Model Text Generation Inference ☆10,491 · Updated last week
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… ☆3,662 · Updated 4 months ago
- Composable building blocks to build Llama Apps ☆8,064 · Updated this week
- Run PyTorch LLMs locally on servers, desktop and mobile ☆3,609 · Updated last week