chenwanqq / candle-llava
Implement LLaVA using Candle
☆14 · Updated 10 months ago
Alternatives and similar repositories for candle-llava:
Users interested in candle-llava are comparing it to the libraries listed below.
- ☆12 · Updated last year
- A collection of optimisers for use with Candle ☆34 · Updated 5 months ago
- ☆19 · Updated 6 months ago
- ☆28 · Updated 5 months ago
- A high-performance constrained-decoding engine based on context-free grammars, in Rust ☆48 · Updated 3 months ago
- ☆11 · Updated 2 months ago
- GPU-based FFT written in Rust and CubeCL ☆21 · Updated last month
- Low-rank adaptation (LoRA) for Candle (the update rule is sketched after this list) ☆142 · Updated 7 months ago
- An extension library for Candle that provides PyTorch functions not currently available in Candle ☆38 · Updated last year
- A Rust crate for audio utilities ☆22 · Updated last month
- Read and write TensorBoard data using Rust ☆20 · Updated last year
- ☆23 · Updated this week
- 👷 Build compute kernels ☆32 · Updated this week
- Implementing the BitNet model in Rust ☆31 · Updated last year
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust ☆38 · Updated last year
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆35 · Updated 11 months ago
- Experimental compiler for deep learning models ☆35 · Updated this week
- ☆126 · Updated 11 months ago
- Proof of concept for running moshi/hibiki using WebRTC ☆18 · Updated last month
- A Keras-like abstraction layer on top of the Rust ML framework Candle ☆23 · Updated 10 months ago
- ☆26 · Updated 4 months ago
- ☆13 · Updated last year
- 8-bit floating-point types for Rust ☆46 · Updated last month
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated last year
- Tensor library for Zig ☆11 · Updated 5 months ago
- A Fish Speech implementation in Rust, with Candle.rs ☆77 · Updated last month
- Fast, Lightweight, Unified Engine for Text2Image Diffusion Models ☆20 · Updated this week
- Code for fine-tuning LLMs with GRPO, specifically for Rust programming, using cargo as feedback ☆79 · Updated last month
- LLaMA 7B with CUDA acceleration implemented in Rust; minimal GPU memory needed ☆104 · Updated last year
- Fast serverless LLM inference, in Rust ☆67 · Updated last month
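
The LoRA entry above refers to the standard low-rank adaptation scheme: a frozen weight matrix W is augmented with the product of two thin trainable matrices, W' = W + (alpha / r) * B * A, so only the small factors B and A are trained. Below is a minimal, framework-agnostic Rust sketch of that update rule; it illustrates the technique only, is not candle-lora's actual API, and every name and value in it is made up for the example.

```rust
// Minimal sketch of the LoRA update rule W' = W + (alpha / r) * B * A.
// Framework-agnostic: plain row-major Vec<f32> matrices, no Candle types.
// NOT the candle-lora API; dimensions and values are illustrative only.

/// Multiply an (m x k) matrix by a (k x n) matrix, both row-major.
fn matmul(a: &[f32], b: &[f32], m: usize, k: usize, n: usize) -> Vec<f32> {
    let mut out = vec![0.0f32; m * n];
    for i in 0..m {
        for p in 0..k {
            let a_ip = a[i * k + p];
            for j in 0..n {
                out[i * n + j] += a_ip * b[p * n + j];
            }
        }
    }
    out
}

fn main() {
    // Frozen base weight W: d x d (values are placeholders).
    let d = 4;
    let w = vec![0.5f32; d * d];

    // Trainable low-rank factors with rank r = 1:
    // B is d x r and A is r x d, so B * A is d x d,
    // but only 2 * d * r parameters are trained.
    let r = 1;
    let b = vec![0.1f32; d * r];
    let a = vec![0.2f32; r * d];
    let alpha = 2.0f32;

    // Effective weight W' = W + (alpha / r) * B * A; W itself never changes.
    let delta = matmul(&b, &a, d, r, d);
    let scale = alpha / r as f32;
    let w_adapted: Vec<f32> = w
        .iter()
        .zip(&delta)
        .map(|(w_ij, d_ij)| w_ij + scale * d_ij)
        .collect();

    println!("adapted row 0: {:?}", &w_adapted[..d]);
}
```

Because r is much smaller than d, the trainable factors stay tiny relative to the frozen base weights, which is what makes the adapter cheap to fine-tune and to store.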