mlc-ai / package
☆13 · Updated this week
Alternatives and similar repositories for package
Users who are interested in package are comparing it to the libraries listed below.
- RWKV models and examples powered by candle. (☆19, updated 6 months ago)
- Implementation of the RWKV language model in pure WebGPU/Rust. (☆314, updated 2 weeks ago)
- AMD-related optimizations for transformer models. (☆83, updated 2 weeks ago)
- Python bindings for ggml. (☆146, updated last year)
- llama.cpp to PyTorch converter. (☆34, updated last year)
- xllamacpp: a Python wrapper for llama.cpp. (☆52, updated last week)
- (☆163, updated this week)
- A safetensors extension to efficiently store sparse quantized tensors on disk. (☆153, updated this week)
- A thin wrapper around GGML to make life easier. (☆40, updated 2 months ago)
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs. (☆89, updated this week)
- Use safetensors with ONNX 🤗. (☆69, updated 2 months ago)
- CPM.cu is a lightweight, high-performance CUDA implementation for LLMs, optimized for end-device inference and featuring cutting-edge tec… (☆175, updated last week)
- Bamboo-7B Large Language Model. (☆93, updated last year)
- A high-throughput and memory-efficient inference and serving engine for LLMs. (☆267, updated 10 months ago)
- (☆120, updated last year)
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, … with efficient code. (☆73, updated 7 months ago)
- Inference of Mamba models in pure C. (☆191, updated last year)
- Comparison of language-model inference engines. (☆229, updated 8 months ago)
- A high-speed, easy-to-use LLM serving framework for local deployment. (☆117, updated 3 weeks ago)
- Fused Qwen3 MoE layer for faster training; compatible with HF Transformers, LoRA, 4-bit quantization, and Unsloth. (☆168, updated last week)
- GPU benchmark. (☆67, updated 7 months ago)
- Samples of good AI-generated CUDA kernels. (☆89, updated 3 months ago)
- Simple high-throughput inference library. (☆127, updated 3 months ago)
- Fast and memory-efficient exact attention. (☆184, updated this week)
- A fast RWKV tokenizer written in Rust. (☆52, updated 3 weeks ago)
- Implementation of nougat that focuses on processing PDFs locally. (☆81, updated 7 months ago)
- QuIP quantization. (☆58, updated last year)
- LLM inference in C/C++. (☆101, updated last week)
- A memory-efficient DLRM training solution using ColossalAI. (☆106, updated 2 years ago)
- A high-throughput and memory-efficient inference and serving engine for LLMs. (☆94, updated this week)