huggingface / candle-cublaslt (☆13, updated last year)
Alternatives and similar repositories for candle-cublaslt

Users interested in candle-cublaslt are comparing it to the libraries listed below.
- GPU-based FFT written in Rust and CubeCL (☆23, updated last month)
- Implement LLaVA using Candle (☆15, updated last year)
- Unleash the full potential of exascale LLMs on consumer-class GPUs, proven by extensive benchmarks, with no long-term adjustments and min… (☆26, updated 8 months ago)
- A collection of optimisers for use with Candle (☆37, updated last week)
- Rust implementation of micrograd (☆52, updated last year)
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust (☆80, updated last year)
- A high-performance constrained-decoding engine based on context-free grammar, in Rust (☆54, updated 2 months ago)
- Dataflow is a data-processing library, primarily for machine learning (☆24, updated 2 years ago)
- Rust crate for some audio utilities (☆26, updated 4 months ago)
- 👷 Build compute kernels (☆87, updated this week)
- (☆137, updated last year)
- Implementing the BitNet model in Rust (☆38, updated last year)
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust (☆38, updated 2 years ago)
- A complete (gRPC service and lib) Rust inference with multilingual embedding support. This version leverages the power of Rust for both GR… (☆39, updated 11 months ago)
- Rust SDK and CLI for Swarm Framework with multi-agent orchestration (☆15, updated 3 months ago)
- This repository has code for fine-tuning LLMs with GRPO specifically for Rust programming, using cargo as feedback (☆100, updated 4 months ago)
- xet client tech, used in huggingface_hub (☆148, updated this week)
- A collection of serverless apps that show how Fermyon's Serverless AI (currently in private beta) works. Reference: https://developer.fer… (☆50, updated 7 months ago)
- Proof of concept for a generative AI application framework powered by WebAssembly and Extism (☆14, updated last year)
- This library supports evaluating disparities in generated image quality, diversity, and consistency between geographic regions (☆20, updated last year)
- First-token cutoff sampling inference example (☆30, updated last year)
- Fast and versatile tokenizer for language models, compatible with SentencePiece, Tokenizers, Tiktoken and more. Supports BPE, Unigram and… (☆28, updated 4 months ago)
- Tantivy directory implementation backed by object_store (☆36, updated last year)
- Generate glue code in seconds to simplify your NVIDIA Triton Inference Server deployments (☆20, updated last year)
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face (☆37, updated last year)
- Fast serverless LLM inference, in Rust (☆88, updated 5 months ago)
- (☆130, updated last year)
- Read and write TensorBoard data using Rust (☆21, updated last year)
- TensorRT-LLM server with Structured Outputs (JSON), built with Rust (☆57, updated 3 months ago)
- "PyTorch in Rust" (☆16, updated last year)