oxideai / diffusers-burn
A diffusers API in Burn (Rust)
☆21 · Updated last year
Alternatives and similar repositories for diffusers-burn
Users interested in diffusers-burn are comparing it to the libraries listed below.
- Low rank adaptation (LoRA) for Candle. ☆158 · Updated 4 months ago
- LLaMA 7B with CUDA acceleration implemented in Rust. Minimal GPU memory needed! ☆109 · Updated 2 years ago
- Democratizing large model inference and training on any device. ☆144 · Updated this week
- 8-bit floating point types for Rust ☆59 · Updated last month
- A minimal OpenCL, CUDA, Vulkan and host CPU array manipulation engine / framework. ☆75 · Updated 3 weeks ago
- Blazingly fast inference of diffusion models. ☆114 · Updated 5 months ago
- An extension library to Candle that provides PyTorch functions not currently available in Candle ☆40 · Updated last year
- Experimental compiler for deep learning models ☆66 · Updated 3 months ago
- A curated collection of Rust projects related to neural networks, designed to complement "Are We Learning Yet." ☆57 · Updated 3 months ago
- Rust library for scheduling, managing resources, and running DAGs 🌙 ☆34 · Updated 7 months ago
- ☆20 · Updated 11 months ago
- Implementing the BitNet model in Rust ☆39 · Updated last year
- GPU-based FFT written in Rust and CubeCL ☆23 · Updated 3 months ago
- A neural network inference library, written in Rust. ☆65 · Updated last year
- A collection of optimisers for use with Candle ☆40 · Updated last month
- Stable Diffusion v1.4 ported to Rust's Burn framework ☆339 · Updated 11 months ago
- Bleeding-edge low-level Rust bindings for GGML ☆16 · Updated last year
- ☆92 · Updated 8 months ago
- Almost-pure Rust TTS engine for my RustNation talk ☆44 · Updated 8 months ago
- ☆58 · Updated 2 years ago
- ESRGAN implemented in Rust with Candle ☆17 · Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated last year
- Gaussian splatting in Rust with wgpu ☆20 · Updated last year
- Rustic bindings for IREE ☆18 · Updated 2 years ago
- Image segmentation on video and images ☆48 · Updated last year
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆228 · Updated last year
- Fast convolutions library implemented completely in Rust. Minimal dependencies required, and especially no external C libraries. ☆26 · Updated 2 years ago
- Automatic differentiation in Rust with WGPU support ☆23 · Updated 3 years ago
- llm_utils: Basic LLM tools, best practices, and minimal abstraction. ☆47 · Updated 6 months ago
- A deep learning and preprocessing framework in Rust with support for CPU and GPU. ☆132 · Updated last year