chenwanqq / candle-llava
Implements LLaVA using candle
☆15 · Updated last year
Alternatives and similar repositories for candle-llava
Users interested in candle-llava are comparing it to the libraries listed below.
- A collection of optimisers for use with candle ☆39 · Updated last week
- ☆130 · Updated last year
- ☆12 · Updated 6 months ago
- A high-performance constrained decoding engine based on context-free grammar, in Rust ☆55 · Updated 3 months ago
- CLI utility to inspect and explore .safetensors and .gguf files ☆26 · Updated 3 weeks ago
- Rust crate for audio utilities ☆26 · Updated 5 months ago
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆38 · Updated 2 years ago
- Proof of concept for running moshi/hibiki using WebRTC ☆20 · Updated 5 months ago
- 👷 Build compute kernels ☆106 · Updated last week
- Code for fine-tuning LLMs with GRPO, specifically for Rust programming, using cargo as feedback ☆101 · Updated 5 months ago
- ☆12 · Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆80 · Updated last year
- Low-rank adaptation (LoRA) for Candle ☆153 · Updated 4 months ago
- Inference engine for GLiNER models, in Rust ☆64 · Updated last month
- ☆20 · Updated 10 months ago
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datas… ☆198 · Updated last month
- Automatically derive Python dunder methods for your Rust code ☆19 · Updated 4 months ago
- ☆33 · Updated 9 months ago
- Transformers provides a simple, intuitive interface for Rust developers who want to work with Large Language Models locally, powered by t… ☆18 · Updated last month
- A Fish Speech implementation in Rust, with Candle.rs ☆94 · Updated 2 months ago
- ☆26 · Updated 8 months ago
- Simple high-throughput inference library ☆126 · Updated 3 months ago
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server ☆420 · Updated last week
- Rust client for the Hugging Face Hub, aiming for a minimal subset of the features of the `huggingface-hub` Python package ☆219 · Updated 2 months ago
- Fast serverless LLM inference, in Rust ☆88 · Updated 5 months ago
- GPU-based FFT written in Rust and CubeCL ☆23 · Updated 2 months ago
- TensorRT-LLM server with Structured Outputs (JSON), built with Rust ☆58 · Updated 3 months ago
- ☆138 · Updated last year
- Faster structured generation ☆242 · Updated 3 months ago
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆37 · Updated last year