ljt019 / transformers
Transformers provides a simple, intuitive interface for Rust developers who want to work with large language models locally. Powered by the Candle crate, it offers an API inspired by Python's Transformers library.
☆18 · Updated 2 months ago
Alternatives and similar repositories for transformers
Users interested in transformers are comparing it to the libraries listed below.
- Fast, lightweight, unified engine for text-to-image diffusion models ☆19 · Updated 5 months ago
- A Fish Speech implementation in Rust, with Candle.rs ☆98 · Updated 4 months ago
- A collection of optimisers for use with Candle ☆41 · Updated last month
- Low-rank adaptation (LoRA) for Candle ☆162 · Updated 5 months ago
- Rust client for the Hugging Face Hub, aiming for a minimal subset of the features of the `huggingface-hub` Python package ☆232 · Updated last week
- Use multiple LLM backends in a single crate, with simple builder-based configuration and built-in prompt chaining & templating ☆137 · Updated 4 months ago
- An implementation of LLaVA using Candle ☆15 · Updated last year
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆41 · Updated last year
- Standalone Rust inference for the Namo-500M series models. Extremely tiny, running VLMs on CPU ☆24 · Updated 6 months ago
- A pure-Rust inference engine for LLMs (including LLM-based MLLMs such as Spark-TTS), powered by the Candle framework ☆167 · Updated 2 weeks ago
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server ☆473 · Updated last week
- Code for fine-tuning LLMs with GRPO specifically for Rust programming, using cargo as feedback ☆107 · Updated 7 months ago
- Unofficial Rust bindings to Apple's MLX framework ☆192 · Updated last week
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆230 · Updated last year
- The easiest Rust interface for local LLMs, and an interface for deterministic signals from probabilistic LLM vibes ☆237 · Updated 2 months ago
- A comprehensive Rust translation of the code from Sebastian Raschka's Build an LLM from Scratch book ☆250 · Updated last week
- Rust bindings to https://github.com/k2-fsa/sherpa-onnx ☆222 · Updated this week
- ☆369 · Updated this week
- Fast serverless LLM inference, in Rust ☆93 · Updated 7 months ago
- Modern, fast document parser written in 🦀 ☆516 · Updated last month
- ONNX neural network inference engine ☆242 · Updated last week
- Fast, streaming indexing, query, and agentic LLM applications in Rust ☆583 · Updated this week
- Blazingly fast inference of diffusion models ☆115 · Updated 6 months ago
- ☆33 · Updated 10 months ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated last year
- Models and examples built with Burn ☆286 · Updated 3 weeks ago
- CLI utility to inspect and explore .safetensors and .gguf files ☆30 · Updated 2 months ago
- Automatically derive Python dunder methods for your Rust code ☆20 · Updated 5 months ago
- Inference engine for GLiNER models, in Rust ☆71 · Updated 3 months ago
- GPU-based FFT written in Rust and CubeCL ☆23 · Updated 3 months ago