minskylab / auto-rust
auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, using procedural macros.
☆44 · Updated last year
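The mechanism in the description above can be sketched: a procedural macro receives the annotated function's signature and doc comment as tokens, sends them to an LLM at compile time, and splices the generated body back into the crate. The sketch below is not auto-rust's actual API (`build_prompt` and the `fn add` signature are illustrative); it shows only the prompt-assembly step, as plain Rust rather than a proc-macro crate:

```rust
// Conceptual sketch, not auto-rust's real implementation. In auto-rust
// the equivalent logic runs inside a procedural macro, which receives
// the annotated function's tokens and replaces its body with
// LLM-generated code at compile time.

/// Assemble the prompt a compile-time codegen macro could send to a
/// model, given the pieces it can read from the function's tokens.
fn build_prompt(signature: &str, doc: &str) -> String {
    format!(
        "Implement the following Rust function.\n\
         Doc comment: {doc}\n\
         Signature: {signature}\n\
         Return only the function body."
    )
}

fn main() {
    // Hypothetical annotated function the macro would expand.
    let signature = "fn add(a: i32, b: i32) -> i32";
    let doc = "Returns the sum of the two arguments.";
    let prompt = build_prompt(signature, doc);
    // The prompt carries both the contract (doc) and the shape (signature).
    assert!(prompt.contains(signature));
    assert!(prompt.contains(doc));
    println!("{prompt}");
}
```

In a real procedural macro the response would then be parsed back into tokens (e.g. with the `syn` and `quote` crates) so the compiler sees ordinary Rust.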
Alternatives and similar repositories for auto-rust
Users interested in auto-rust are comparing it to the libraries listed below.
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated last year
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆46 · Updated last year
- Andrej Karpathy's "Let's build GPT: from scratch" video & notebook implemented in Rust + Candle ☆76 · Updated last year
- A set of Rust macros for working with OpenAI function/tool calls. ☆55 · Updated last year
- A library for retrieval-augmented generation (RAG) ☆78 · Updated this week
- A Rust vector for large amounts of data that avoids copying when growing, by using full `mmap`'d pages. ☆22 · Updated last year
- 🦀 A pure-Rust framework for building AGI (WIP). ☆110 · Updated last week
- Your AI Copilot in Rust ☆49 · Updated last year
- allms: One Rust Library to rule them aLLMs ☆104 · Updated last week
- AI gateway and observability server written in Rust, designed to help optimize multi-agent workflows. ☆64 · Updated last year
- Anthropic Rust SDK 🦀 with async support. ☆66 · Updated 9 months ago
- Friendly interface to chat with an Ollama instance. ☆87 · Updated 2 months ago
- Low-rank adaptation (LoRA) for Candle. ☆168 · Updated 7 months ago
- GPT: Rust Assistant. Your go-to expert in the Rust ecosystem, specializing in precise code interpretation, up-to-date crate version check… ☆18 · Updated 9 months ago
- llm_utils: Basic LLM tools, best practices, and minimal abstraction. ☆47 · Updated 9 months ago
- Bleeding-edge low-level Rust binding for GGML ☆16 · Updated last year
- bott: Your Terminal Copilot ☆87 · Updated last year
- A simplified example in Rust of training a neural network and then using it, based on the Candle framework by Hugging Face. ☆39 · Updated 2 years ago
- LLaMa 7b with CUDA acceleration implemented in Rust. Minimal GPU memory needed! ☆110 · Updated 2 years ago
- OpenAI Dive, an unofficial async Rust library for interacting with the OpenAI API. ☆75 · Updated 3 weeks ago
- ☆13 · Updated 2 years ago
- Deploy dioxus-web to Vercel. ☆29 · Updated last year
- OpenAI-compatible API for serving the LLAMA-2 model ☆218 · Updated 2 years ago
- A tool to extract images from PDF files ☆61 · Updated last year
- Rust library for scheduling, managing resources, and running DAGs 🌙 ☆36 · Updated 10 months ago
- A Rust 🦀 port of the Hugging Face smolagents library. ☆42 · Updated 8 months ago
- Structured outputs for LLMs ☆52 · Updated last year
- Fast serverless LLM inference, in Rust. ☆108 · Updated last month
- A whisper <lib|cli|server> written in Rust ☆19 · Updated 3 months ago
- Implementing the BitNet model in Rust ☆42 · Updated last year