mlc-ai / tokenizers-cpp
Universal cross-platform tokenizer bindings to Hugging Face tokenizers and SentencePiece
☆426 · Updated 3 months ago
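For context on what the library does, here is a minimal usage sketch based on the pattern shown in the tokenizers-cpp README: load a Hugging Face `tokenizer.json` blob, encode a string to token ids, and decode it back. `LoadBytesFromFile` is a local helper written here for the example, not part of the library.

```cpp
// Minimal sketch: round-trip a string through a HF tokenizer via tokenizers-cpp.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

#include <tokenizers_cpp.h>  // from mlc-ai/tokenizers-cpp

// Helper (defined here, not a library API): read a whole file into a string.
static std::string LoadBytesFromFile(const std::string& path) {
  std::ifstream fs(path, std::ios::binary);
  std::ostringstream buf;
  buf << fs.rdbuf();
  return buf.str();
}

int main() {
  // FromBlobJSON takes the raw contents of a HF tokenizer.json;
  // a FromBlobSentencePiece factory exists for sentencepiece .model blobs.
  auto tok = tokenizers::Tokenizer::FromBlobJSON(
      LoadBytesFromFile("tokenizer.json"));
  std::vector<int> ids = tok->Encode("What is the capital of Canada?");
  std::string text = tok->Decode(ids);
  std::cout << text << "\n";
  return 0;
}
```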
Alternatives and similar repositories for tokenizers-cpp
Users interested in tokenizers-cpp are comparing it to the libraries listed below.
- LLaMA/RWKV ONNX models, quantization, and test cases ☆368 · Updated 2 years ago
- ☆70 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆477 · Updated last year
- ☆205 · Updated 7 months ago
- Common source, scripts, and utilities for creating Triton backends. ☆361 · Updated 3 weeks ago
- ☆140 · Updated last year
- Export LLaMA to ONNX. ☆137 · Updated 11 months ago
- Transformer-related optimization, including BERT and GPT. ☆59 · Updated 2 years ago
- llm-export can export LLM models to ONNX. ☆333 · Updated last month
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆958 · Updated last year
- GPTQ inference Triton kernel. ☆316 · Updated 2 years ago
- A general 2-8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, with easy export to ONNX/ONNX Runtime. ☆183 · Updated 8 months ago
- Common utilities for ONNX converters. ☆287 · Updated 3 months ago
- ☆125 · Updated last year
- ☆170 · Updated 3 weeks ago
- ☆130 · Updated 11 months ago
- onnxruntime-extensions: a specialized pre- and post-processing library for ONNX Runtime. ☆430 · Updated this week
- The Triton backend for the ONNX Runtime. ☆168 · Updated this week
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆43 · Updated 9 months ago
- Easy and Efficient Quantization for Transformers. ☆203 · Updated 5 months ago
- ☆317 · Updated last week
- Efficient GPU support for LLM inference with x-bit quantization (e.g. FP6, FP5). ☆272 · Updated 4 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆267 · Updated 4 months ago
- ONNX Optimizer ☆780 · Updated last month
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆148 · Updated 3 months ago
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. ☆720 · Updated 4 months ago
- A quantization algorithm for LLMs. ☆146 · Updated last year
- The Triton TensorRT-LLM Backend. ☆910 · Updated last week
- Inference Vision Transformer (ViT) in plain C/C++ with ggml. ☆300 · Updated last year
- Benchmark code for the "Online normalizer calculation for softmax" paper (the technique is sketched below). ☆102 · Updated 7 years ago
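As a quick illustration of the technique the last item benchmarks (not taken from that repository's code): the online normalizer from Milakov & Gimelshein (2018) fuses the max and sum passes of a numerically stable softmax, updating the running maximum `m` and denominator `d` together in a single sweep.

```cpp
// Single-pass "online" softmax normalizer: when a new maximum appears,
// the accumulated denominator is rescaled by exp(m_old - m_new).
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  std::vector<float> x = {1.0f, 3.0f, 2.0f, 5.0f};
  float m = -INFINITY;  // running maximum of the inputs seen so far
  float d = 0.0f;       // running sum of exp(x_i - m)
  for (float xi : x) {
    float m_new = std::max(m, xi);
    d = d * std::exp(m - m_new) + std::exp(xi - m_new);
    m = m_new;
  }
  // Only the output pass remains; m and d were computed in one sweep.
  for (float xi : x) std::printf("%f\n", std::exp(xi - m) / d);
  return 0;
}
```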