mlc-ai / tokenizers-cpp
Universal cross-platform tokenizer bindings to Hugging Face tokenizers and SentencePiece
☆388 · Updated last month
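For context, a minimal sketch of the binding's C++ surface, following the encode/decode round trip its README describes; the `tokenizer.json` path and the `LoadBytesFromFile` helper are illustrative assumptions:

```cpp
#include <tokenizers_cpp.h>

#include <fstream>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

// Illustrative helper (not part of the library): read a whole file into a string.
static std::string LoadBytesFromFile(const std::string& path) {
  std::ifstream fs(path, std::ios::in | std::ios::binary);
  return {std::istreambuf_iterator<char>(fs), std::istreambuf_iterator<char>()};
}

int main() {
  using tokenizers::Tokenizer;
  // The factory APIs take an in-memory blob; "tokenizer.json" is a placeholder path.
  std::unique_ptr<Tokenizer> tok =
      Tokenizer::FromBlobJSON(LoadBytesFromFile("tokenizer.json"));

  std::string prompt = "What is the capital of Canada?";
  std::vector<int32_t> ids = tok->Encode(prompt);  // text -> token ids
  std::string decoded = tok->Decode(ids);          // token ids -> text
  std::cout << decoded << "\n";
  return 0;
}
```

The SentencePiece side of the binding follows the same pattern via the `Tokenizer::FromBlobSentencePiece` factory.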
Alternatives and similar repositories for tokenizers-cpp
Users interested in tokenizers-cpp are comparing it to the libraries listed below.
- LLaMA/RWKV ONNX models, quantization, and test cases ☆365 · Updated 2 years ago
- Transformer-related optimizations, including BERT and GPT ☆59 · Updated last year
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- ☆68 · Updated 2 years ago
- ☆197 · Updated 4 months ago
- ☆141 · Updated last year
- GPTQ inference Triton kernel ☆307 · Updated 2 years ago
- ☆128 · Updated 8 months ago
- Common utilities for ONNX converters ☆279 · Updated last week
- Export LLaMA to ONNX ☆134 · Updated 8 months ago
- Common source, scripts, and utilities for creating Triton backends ☆347 · Updated last week
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ and easy export to ONNX/ONNX Runtime ☆177 · Updated 5 months ago
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆895 · Updated last year
- ☆412 · Updated last year
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios ☆40 · Updated 6 months ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime (see the sketch after this list) ☆411 · Updated this week
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆264 · Updated last month
- ☆125 · Updated last year
- A quantization algorithm for LLMs ☆141 · Updated last year
- ☆59 · Updated 9 months ago
- Running BERT without Padding ☆475 · Updated 3 years ago
- BitBLAS is a library supporting mixed-precision matrix multiplications, especially for quantized LLM deployment ☆667 · Updated last month
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆265 · Updated 2 months ago
- The Triton backend for ONNX Runtime ☆161 · Updated last week
- Inference of a Vision Transformer (ViT) in plain C/C++ with ggml ☆294 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆753 · Updated 6 months ago
- The Triton TensorRT-LLM Backend ☆887 · Updated this week
- ☆164 · Updated this week
- llm-export can export LLM models to ONNX ☆306 · Updated last week
- A collection of memory-efficient attention operators implemented in the Triton language ☆277 · Updated last year
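As flagged in the onnxruntime-extensions entry above, here is a minimal sketch of loading its custom pre/post-processing ops (tokenizers, string ops, and similar) into an ONNX Runtime session through the `RegisterCustomOps` entry point the library exports for C/C++ callers; `model.onnx` is a placeholder and is assumed to contain ops from the extensions' custom-op domain:

```cpp
#include <onnxruntime_cxx_api.h>
#include <onnxruntime_extensions.h>  // declares RegisterCustomOps

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "ortx-demo");
  Ort::SessionOptions session_options;

  // Register the extensions' custom ops with the session options
  // before the session is created, so the graph can resolve them.
  Ort::ThrowOnError(RegisterCustomOps(session_options, OrtGetApiBase()));

  // "model.onnx" is a placeholder path (on Windows the path would be wide chars).
  Ort::Session session(env, "model.onnx", session_options);
  return 0;
}
```

The same library can also be loaded dynamically as a shared object and attached with ONNX Runtime's generic custom-ops-library mechanism; the in-process registration above is simply the statically linked variant.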