SqueezeBits / Torch-TRTLLM
Ditto is an open-source framework that enables direct conversion of HuggingFace PreTrainedModels into TensorRT-LLM engines.
☆52 · Updated 5 months ago
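As a quick orientation, below is a minimal sketch of the workflow that description implies: loading a HuggingFace PreTrainedModel and handing it directly to a converter that emits a TensorRT-LLM engine. The `transformers` calls are real, but `build_trtllm_engine` and its parameters are assumed names for illustration, not Ditto's actual API.

```python
# Hypothetical sketch of the conversion Ditto's description implies.
# The `transformers` calls are real; `build_trtllm_engine` and its
# parameters are assumed names, NOT Ditto's actual API.
from transformers import AutoModelForCausalLM

# Load an ordinary HuggingFace PreTrainedModel.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.1-8B")

# A Ditto-style converter would take the in-memory module directly and
# emit a TensorRT-LLM engine, skipping the intermediate checkpoint
# conversion step of the stock trtllm-build workflow.
engine_dir = build_trtllm_engine(       # assumed entry point
    model,
    output_dir="llama3-trtllm-engine",  # assumed parameter
    max_batch_size=8,                   # assumed parameter
    max_seq_len=4096,                   # assumed parameter
)
```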
Alternatives and similar repositories for Torch-TRTLLM
Users interested in Torch-TRTLLM are comparing it to the libraries listed below.
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- Easy and Efficient Quantization for Transformers ☆203 · Updated 5 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆149 · Updated this week
- Boosting 4-bit inference kernels with 2:4 Sparsity (see the 2:4 pruning sketch after this list) ☆86 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last week
- ☆27 · Updated last year
- Efficient LLM Inference over Long Sequences ☆393 · Updated 5 months ago
- ☆71 · Updated 8 months ago
- A performance library for machine learning applications. ☆185 · Updated 2 years ago
- ☆205 · Updated 7 months ago
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆218 · Updated 2 years ago
- Fast low-bit matmul kernels in Triton ☆407 · Updated 3 weeks ago
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆192 · Updated this week
- Study Group of Deep Learning Compiler ☆165 · Updated 2 years ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆151 · Updated 3 months ago
- This repository contains the training code of ParetoQ, introduced in the paper "ParetoQ: Scaling Laws in Extremely Low-bit LLM Quantization" ☆115 · Updated 2 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆214 · Updated this week
- ring-attention experiments ☆160 · Updated last year
- Applied AI experiments and examples for PyTorch ☆309 · Updated 3 months ago
- llama3.cuda is a pure C/CUDA implementation of the Llama 3 model. ☆349 · Updated 7 months ago
- Pipeline parallelism for the minimalist ☆37 · Updated 4 months ago
- [ACL 2025 Main] EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆317 · Updated 2 weeks ago
- A block-oriented training approach for inference-time optimization. ☆33 · Updated last year
- Quantize transformers to any learned arbitrary 4-bit numeric format ☆50 · Updated 5 months ago
- ☆114 · Updated 6 months ago
- ☆319 · Updated last week
- Training-free, post-training sub-quadratic-complexity attention, implemented with OpenAI Triton. ☆148 · Updated last month
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance. ☆302 · Updated this week
- Official implementation for Training LLMs with MXFP4 ☆112 · Updated 7 months ago
- A minimal cache manager for PagedAttention, on top of llama3 (see the block-manager sketch after this list). ☆127 · Updated last year
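The 2:4-sparsity item above refers to NVIDIA's semi-structured sparsity pattern: in every contiguous group of four weights, at most two are nonzero, which Ampere-and-newer sparse tensor cores can exploit. Below is a minimal magnitude-based pruning sketch of that constraint in plain PyTorch; it illustrates the pattern only and is not that repository's kernel code.

```python
# 2:4 semi-structured sparsity: in every group of 4 weights, keep the
# 2 with the largest magnitude and zero out the other 2.
import torch

def prune_2_4(w: torch.Tensor) -> torch.Tensor:
    groups = w.reshape(-1, 4)
    # Indices of the two smallest-magnitude entries in each group of four.
    _, drop = groups.abs().topk(2, dim=-1, largest=False)
    mask = torch.ones_like(groups)
    mask.scatter_(-1, drop, 0.0)  # zero the dropped positions
    return (groups * mask).reshape(w.shape)

w = torch.randn(8, 16)
ws = prune_2_4(w)
# Every group of four now has at most two nonzero entries.
assert (ws.reshape(-1, 4) != 0).sum(dim=-1).max() <= 2
```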
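PagedAttention, referenced in the last item, manages the KV cache in fixed-size blocks and maps each sequence's logical blocks to physical ones through a block table, so memory is allocated on demand rather than reserved contiguously per sequence. The sketch below is a minimal illustrative block manager; the block size, pool size, and method names are choices made here, not that repository's API.

```python
# Minimal sketch of a PagedAttention-style KV-cache block manager.
# Names and sizes are illustrative, not the repo's actual API.

BLOCK_SIZE = 16  # tokens stored per KV-cache block

class BlockManager:
    def __init__(self, num_blocks: int):
        # Physical blocks form a fixed pool; freed blocks are recycled.
        self.free_blocks = list(range(num_blocks))
        # Per-sequence block table: logical block index -> physical block id.
        self.block_tables: dict[int, list[int]] = {}

    def ensure_capacity(self, seq_id: int, num_tokens: int) -> None:
        """Grow a sequence's block table until it can hold num_tokens tokens."""
        table = self.block_tables.setdefault(seq_id, [])
        needed = -(-num_tokens // BLOCK_SIZE)  # ceiling division
        while len(table) < needed:
            if not self.free_blocks:
                raise MemoryError("KV-cache pool exhausted; evict or preempt")
            table.append(self.free_blocks.pop())

    def free(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the pool."""
        self.free_blocks.extend(self.block_tables.pop(seq_id, []))

# Usage: two sequences share one pool with no contiguous reservation.
mgr = BlockManager(num_blocks=64)
mgr.ensure_capacity(seq_id=0, num_tokens=40)   # needs 3 blocks
mgr.ensure_capacity(seq_id=1, num_tokens=100)  # needs 7 blocks
mgr.free(0)                                    # 3 blocks recycled
print(len(mgr.free_blocks))                    # 64 - 7 = 57
```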