flashinfer-ai / flashinfer
FlashInfer: Kernel Library for LLM Serving
☆2,078 · Updated this week
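For a sense of what the library does, here is a minimal sketch of flashinfer's single-request decode attention, following the pattern in the project README (the shapes and CUDA device are assumptions; check the flashinfer docs for the current API):

```python
import torch
import flashinfer  # requires a CUDA build of flashinfer

num_heads, head_dim, kv_len = 32, 128, 4096

# Single decode step: one query token attends over the full KV cache,
# so q has no sequence dimension.
q = torch.randn(num_heads, head_dim, dtype=torch.float16, device="cuda")
k = torch.randn(kv_len, num_heads, head_dim, dtype=torch.float16, device="cuda")
v = torch.randn(kv_len, num_heads, head_dim, dtype=torch.float16, device="cuda")

# Fused decode-attention kernel; returns the attention output.
o = flashinfer.single_decode_with_kv_cache(q, k, v)  # [num_heads, head_dim]
```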
Alternatives and similar repositories for flashinfer:
Users interested in flashinfer also compare it to the libraries listed below.
- A throughput-oriented, high-performance serving framework for LLMs ☆737 · Updated 4 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆723 · Updated 5 months ago
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆2,182 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,192 · Updated 4 months ago
- Tile primitives for speedy kernels ☆2,042 · Updated this week
- The Triton TensorRT-LLM Backend ☆779 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆977 · Updated this week
- Official Implementation of EAGLE-1 (ICML'24) and EAGLE-2 (EMNLP'24) ☆957 · Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,339 · Updated 7 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (a minimal group-wise INT4 sketch follows this list) ☆2,743 · Updated last week
- [NeurIPS'24 Spotlight, ICLR'25] Speeds up long-context LLM inference by computing attention with approximate, dynamic sparsity, which r… ☆913 · Updated last week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference ☆1,946 · Updated last month
- A PyTorch Native LLM Training Framework ☆730 · Updated last month
- Minimalistic large language model 3D-parallelism training ☆1,457 · Updated this week
- Mirage: Automatically Generating Fast GPU Kernels without Programming in Triton/CUDA ☆743 · Updated last week
- QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving ☆496 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆2,868 · Updated this week
- Fast, Flexible and Portable Structured Generation ☆704 · Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,426 · Updated 7 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed ☆1,965 · Updated last week
- Serving multiple LoRA-finetuned LLMs as one ☆1,028 · Updated 9 months ago
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,756 · Updated this week
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,987 · Updated last week
- Analyze the inference of Large Language Models (LLMs): computation, storage, transmission, and hardware roofline mod… ☆391 · Updated 5 months ago
- TensorRT Model Optimizer is a unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillati… ☆708 · Updated this week
- PyTorch native quantization and sparsity for training and inference ☆1,842 · Updated this week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment ☆520 · Updated this week
- Ring attention implementation with flash attention ☆674 · Updated 2 months ago
- Scalable toolkit for efficient model alignment ☆719 · Updated this week
- Pipeline Parallelism for PyTorch ☆749 · Updated 5 months ago
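Several of the entries above (Marlin, AWQ/AutoAWQ, QServe, BitBLAS) center on group-wise low-bit weight quantization. Below is a minimal pure-PyTorch sketch of the quantize/dequantize math these kernels accelerate; it is illustrative only, not any specific library's scheme, and the group size and symmetric scaling are assumptions:

```python
import torch

GROUP_SIZE = 128  # columns per quantization group (128 is a common choice)

def quantize_int4_groupwise(w: torch.Tensor, group_size: int = GROUP_SIZE):
    """Symmetric group-wise INT4 quantization of a [out, in] weight matrix."""
    out_f, in_f = w.shape
    wg = w.reshape(out_f, in_f // group_size, group_size)
    # One scale per group: map the group's max magnitude to 7 (INT4 range [-8, 7]).
    scales = wg.abs().amax(dim=-1, keepdim=True).clamp(min=1e-6) / 7.0
    # Real kernels pack two 4-bit values per byte; int8 storage keeps this sketch simple.
    q = torch.clamp(torch.round(wg / scales), -8, 7).to(torch.int8)
    return q.reshape(out_f, in_f), scales.squeeze(-1)

def dequantize_int4_groupwise(q: torch.Tensor, scales: torch.Tensor,
                              group_size: int = GROUP_SIZE):
    out_f, in_f = q.shape
    qg = q.reshape(out_f, in_f // group_size, group_size).to(scales.dtype)
    return (qg * scales.unsqueeze(-1)).reshape(out_f, in_f)

w = torch.randn(4096, 4096, dtype=torch.float16)
q, s = quantize_int4_groupwise(w)
w_hat = dequantize_int4_groupwise(q, s)
print((w - w_hat).abs().mean())  # mean absolute quantization error
```

The listed libraries differ mainly in how they pick the scales (e.g., AWQ uses activation statistics) and in the fused GPU kernels that multiply activations against the still-packed weights.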