MistApproach / callm
Run Generative AI models directly on your hardware
☆36 · Updated last year
Alternatives and similar repositories for callm
Users interested in callm are comparing it to the libraries listed below.
- Llama2 LLM ported to Rust Burn ☆280 · Updated last year
- Models and examples built with Burn ☆282 · Updated last week
- The easiest Rust interface for local LLMs, and an interface for deterministic signals from probabilistic LLM vibes ☆233 · Updated last month
- Low-rank adaptation (LoRA) for Candle ☆158 · Updated 4 months ago
- Rust client for the Hugging Face Hub, aiming for a minimal subset of the features of the `huggingface-hub` Python package (see the download sketch after this list) ☆221 · Updated 2 months ago
- Rust client for the Qdrant vector search engine ☆326 · Updated last week
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆228 · Updated last year
- LLM orchestrator built in Rust ☆282 · Updated last year
- ONNX neural network inference engine ☆233 · Updated this week
- An LLM interface (chat bot) implemented in pure Rust using HuggingFace/Candle over Axum WebSockets, an SQLite database, and a Leptos (Was … ☆136 · Updated 11 months ago
- Andrej Karpathy's "Let's build GPT: from scratch" video and notebook implemented in Rust + Candle ☆75 · Updated last year
- Rust SDK for the Model Context Protocol (MCP) ☆133 · Updated 2 months ago
- Use multiple LLM backends in a single crate, with simple builder-based configuration and built-in prompt chaining and templating ☆135 · Updated 3 months ago
- Unofficial Rust bindings to Apple's MLX framework ☆189 · Updated last week
- A powerful Rust library and CLI tool to unify and orchestrate multiple LLM, agent and voice backends (OpenAI, Claude, Gemini, Ollama, Ele… ☆207 · Updated this week
- Fast, streaming indexing, query, and agentic LLM applications in Rust ☆552 · Updated this week
- pgvector support for Rust (see the insert-and-query sketch after this list) ☆178 · Updated last week
- ☆356 · Updated last week
- Efficient platform for inference and serving local LLMs, including an OpenAI-compatible API server ☆453 · Updated this week
- Rust library for generating vector embeddings and reranking; a rewrite of qdrant/fastembed ☆593 · Updated last week
- Democratizing large-model inference and training on any device ☆144 · Updated this week
- Tutorial for porting PyTorch Transformer models to Candle (Rust) (see the Candle sketch after this list) ☆313 · Updated last year
- Library for retrieval-augmented generation (RAG) ☆75 · Updated last month
- Implementation of the Coursera ML course in Rust ☆45 · Updated 7 months ago
- Example of tch-rs on M1 ☆54 · Updated last year
- A set of Rust macros for working with OpenAI function/tool calls ☆53 · Updated last year
- OpenAI Dive is an unofficial async Rust library for interacting with the OpenAI API ☆70 · Updated this week
- Inference Llama 2 in one file of pure Rust 🦀 ☆233 · Updated 2 years ago
- A Rust implementation of OpenAI's Whisper model using the Burn framework ☆322 · Updated last year
- Safe, portable, high-performance compute (GPGPU) kernels ☆237 · Updated last month
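For the Hugging Face Hub client listed above, a minimal download sketch with the `hf-hub` crate might look like the following; the crate version, repo id, and file name are placeholder assumptions, not taken from the repo itself.

```rust
// Cargo.toml (assumed): hf-hub = "0.3"
use hf_hub::api::sync::Api;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Build a synchronous client against the public Hugging Face Hub.
    let api = Api::new()?;

    // "bert-base-uncased" and "config.json" are placeholder examples;
    // get() downloads the file into the local Hugging Face cache and
    // returns the path to the cached copy.
    let repo = api.model("bert-base-uncased".to_string());
    let config = repo.get("config.json")?;

    println!("cached at {}", config.display());
    Ok(())
}
```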
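For the pgvector crate listed above, a rough insert-and-query sketch, assuming the crate's `sqlx` feature, a running Postgres with the `vector` extension installed, and a hypothetical `items(id BIGSERIAL, embedding vector(3))` table; connection string, table, and versions are assumptions.

```rust
// Cargo.toml (assumed): pgvector = { version = "0.4", features = ["sqlx"] }
//                       sqlx = { version = "0.8", features = ["postgres", "runtime-tokio"] }
//                       tokio = { version = "1", features = ["full"] }
use pgvector::Vector;
use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder connection string; adjust for your setup.
    let pool = PgPoolOptions::new()
        .connect("postgres://localhost/example")
        .await?;

    // Insert a 3-dimensional embedding into the hypothetical items table.
    let embedding = Vector::from(vec![0.1f32, 0.2, 0.3]);
    sqlx::query("INSERT INTO items (embedding) VALUES ($1)")
        .bind(embedding)
        .execute(&pool)
        .await?;

    // Nearest-neighbour lookup ordered by L2 distance (pgvector's `<->` operator).
    // Assumes a BIGSERIAL / BIGINT id column.
    let query = Vector::from(vec![0.1f32, 0.2, 0.3]);
    let row: (i64,) = sqlx::query_as("SELECT id FROM items ORDER BY embedding <-> $1 LIMIT 1")
        .bind(query)
        .fetch_one(&pool)
        .await?;

    println!("nearest id: {}", row.0);
    Ok(())
}
```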
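For the Candle-related entries (the porting tutorial, LoRA, and Whisper ports), a minimal candle-core tensor sketch for orientation; it is not taken from any of the listed repos, and the crate version is an assumption.

```rust
// Cargo.toml (assumed): candle-core = "0.8"
use candle_core::{DType, Device, Tensor};

fn main() -> candle_core::Result<()> {
    let device = Device::Cpu;

    // Build two small matrices and multiply them, mirroring torch.matmul.
    let a = Tensor::new(&[[1f32, 2.], [3., 4.]], &device)?;
    let b = Tensor::ones((2, 2), DType::F32, &device)?;
    let c = a.matmul(&b)?;

    println!("{c}");
    Ok(())
}
```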