suzuke / bitnet_rs
An unofficial implementation of BitNet
☆11 · Updated last year
Alternatives and similar repositories for bitnet_rs:
Users interested in bitnet_rs are comparing it to the libraries listed below.
- Implementing the BitNet model in Rust ☆31 · Updated 11 months ago
- An extension library to Candle that provides PyTorch functions not currently available in Candle ☆38 · Updated last year
- A diffusers API in Burn (Rust) ☆19 · Updated 8 months ago
- Graph model execution API for Candle ☆13 · Updated 4 months ago
- 8-bit floating point types for Rust ☆46 · Updated last week
- A Keras-like abstraction layer on top of the Rust ML framework Candle ☆23 · Updated 9 months ago
- GGML bindings that aim to be idiomatic Rust rather than directly corresponding to the C/C++ interface ☆19 · Updated last year
- A Rust vector for large amounts of data that does not copy when growing, by using full `mmap`'d pages ☆22 · Updated last year
- Experimental compiler for deep learning models ☆28 · Updated last month
- A Rust library for high-performance tensor exchange with Python ☆40 · Updated 4 months ago
- Gaussian splatting in Rust with wgpu ☆19 · Updated last year
- auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing… ☆37 · Updated 4 months ago
- Tensor library for Zig ☆11 · Updated 4 months ago
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆33 · Updated 10 months ago
- Bleeding-edge low-level Rust binding for GGML ☆16 · Updated 8 months ago
- A Fish Speech implementation in Rust, with Candle.rs ☆75 · Updated last month
- A prototype of creating a React Three Fiber-like experience with Leptos and ThreeJS ☆30 · Updated last year
- A neural network inference library, written in Rust ☆61 · Updated 8 months ago
- Generative AI web UI and server ☆22 · Updated last year
- Build tools for LLMs in Rust using Model Context Protocol ☆33 · Updated 3 weeks ago
- llm_utils: Basic LLM tools, best practices, and minimal abstraction ☆42 · Updated last month
- Standalone Rust inference of Namo-500M series models. Extremely tiny, running a VLM on CPU ☆22 · Updated last week
- ☆19 · Updated 5 months ago
- A collection of optimisers for use with Candle ☆34 · Updated 4 months ago
- Acoustic echo cancellation in Rust with speexdsp ☆28 · Updated 3 months ago
- ESRGAN implemented in Rust with Candle ☆15 · Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated last year
- GPU-based FFT written in Rust and CubeCL ☆20 · Updated last week
- An n-dimensional array library that uses wgpu to run compute shaders on all wgpu backends (and multiple at once) ☆30 · Updated 4 years ago
- Run LLaMA inference on CPU, with Rust 🦀🚀🦙 ☆20 · Updated 2 months ago