michaelfeil / candle-flash-attn-v3
★ 13 · Updated 9 months ago
Alternatives and similar repositories for candle-flash-attn-v3
Users interested in candle-flash-attn-v3 are comparing it to the libraries listed below.
- Implement LLaVA using candle (★ 15 · Updated last year)
- Build compute kernels (★ 171 · Updated this week)
- Rust crate for some audio utilities (★ 25 · Updated 8 months ago)
- Simple high-throughput inference library (★ 149 · Updated 6 months ago)
- Code for fine-tuning LLMs on Rust programming with GRPO, using cargo as feedback (★ 111 · Updated 8 months ago)
- ★ 21 · Updated 8 months ago
- vLLM adapter for a TGIS-compatible gRPC server (★ 44 · Updated this week)
- A high-performance constrained decoding engine based on context-free grammars, in Rust (★ 55 · Updated 5 months ago)
- GPU-based FFT written in Rust and CubeCL (★ 24 · Updated 5 months ago)
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datasets (★ 214 · Updated last month)
- Read and write TensorBoard data using Rust (★ 23 · Updated last year)
- A collection of optimisers for use with candle (★ 43 · Updated 3 months ago)
- Proof of concept for running moshi/hibiki using WebRTC (★ 19 · Updated 8 months ago)
- ★ 12 · Updated last year
- A PTX tutorial written purely by AIs (OpenAI Deep Research and Claude 3.7) (★ 66 · Updated 7 months ago)
- Inference Llama 2 in one file of zero-dependency, zero-unsafe Rust (★ 39 · Updated 2 years ago)
- ★ 135 · Updated last year
- Cray-LM unified training and inference stack (★ 22 · Updated 9 months ago)
- Rust implementation of micrograd (★ 53 · Updated last year)
- CLI utility to inspect and explore .safetensors and .gguf files (★ 34 · Updated 2 weeks ago)
- Make Triton easier (★ 48 · Updated last year)
- Collection of autoregressive model implementations (★ 86 · Updated 6 months ago)
- A collection of reproducible inference engine benchmarks (★ 37 · Updated 6 months ago)
- IBM development fork of https://github.com/huggingface/text-generation-inference (★ 62 · Updated last month)
- PCCL (Prime Collective Communications Library) implements fault-tolerant collective communications over IP (★ 138 · Updated 2 months ago)
- A Python wrapper around HuggingFace's TGI (text-generation-inference) and TEI (text-embedding-inference) servers (★ 33 · Updated last month)
- Efficient non-uniform quantization with GPTQ for GGUF (★ 53 · Updated last month)
- ★ 36 · Updated 11 months ago
- ★ 19 · Updated last year
- Inference engine for GLiNER models, in Rust (★ 76 · Updated last week)