flexflow / FlexFlow
FlexFlow Serve: Low-Latency, High-Performance LLM Serving
☆1,645 · Updated this week
Related projects:
- FlashInfer: Kernel Library for LLM Serving ☆1,138 · Updated this week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs… ☆1,811 · Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,180 · Updated 2 months ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (a toy quantization sketch follows this list) ☆2,333 · Updated 2 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆1,843 · Updated last week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆1,875 · Updated 5 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,830 · Updated 2 weeks ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability… ☆2,292 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,099 · Updated 7 months ago
- Serving multiple LoRA-finetuned LLMs as one ☆946 · Updated 4 months ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable… ☆1,519 · Updated 7 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆560 · Updated 2 weeks ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters ☆1,698 · Updated 7 months ago
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (a toy draft-and-verify sketch follows this list) ☆2,206 · Updated 2 months ago
- Pipeline Parallelism for PyTorch ☆708 · Updated 3 weeks ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,314 · Updated 5 months ago
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot". ☆694 · Updated 3 weeks ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆1,624 · Updated this week
- The Triton TensorRT-LLM Backend ☆654 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆470 · Updated this week
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆994 · Updated 5 months ago
- Official Implementation of EAGLE-1 and EAGLE-2 ☆747 · Updated 3 weeks ago
- Microsoft Automatic Mixed Precision Library ☆505 · Updated 3 weeks ago
- 📖 A curated list of Awesome LLM Inference papers with code: TensorRT-LLM, vLLM, streaming-llm, AWQ, SmoothQuant, WINT8/4, Continuous Batching… ☆2,475 · Updated this week
- A PyTorch Native LLM Training Framework ☆575 · Updated 3 weeks ago
- Minimalistic large language model 3D-parallelism training ☆1,111 · Updated this week
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆451 · Updated 6 months ago
- PyTorch extensions for high-performance and large-scale training. ☆3,149 · Updated 2 weeks ago
- Transformer-related optimization, including BERT and GPT ☆5,773 · Updated 5 months ago
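
Several of the projects above (SmoothQuant, GPTQ, AWQ, AutoAWQ) center on low-bit weight quantization. The sketch below is a toy round-to-nearest group-wise 4-bit quantizer in plain PyTorch, meant only to illustrate the basic scale/zero-point mechanics; it is not the algorithm of any of these repos, and every function name in it is made up for this example.

```python
# Toy group-wise 4-bit weight quantization (round-to-nearest, asymmetric).
# Illustrative only -- real methods like GPTQ/AWQ use calibration data and
# error-aware rounding, not plain RTN. All names here are hypothetical.
import torch

def quantize_4bit_groupwise(w: torch.Tensor, group_size: int = 128):
    """Quantize each group of `group_size` weights to 4-bit codes [0, 15]."""
    orig_shape = w.shape
    groups = w.reshape(-1, group_size)                  # [n_groups, group_size]
    w_min = groups.min(dim=1, keepdim=True).values      # per-group zero point
    w_max = groups.max(dim=1, keepdim=True).values
    scale = (w_max - w_min).clamp(min=1e-8) / 15.0      # 4 bits -> 16 levels
    q = torch.clamp(torch.round((groups - w_min) / scale), 0, 15).to(torch.uint8)
    return q, scale, w_min, orig_shape

def dequantize_4bit_groupwise(q, scale, w_min, orig_shape):
    """Reconstruct approximate fp32 weights from codes, scales, zero points."""
    return (q.float() * scale + w_min).reshape(orig_shape)

w = torch.randn(4096, 4096)
q, scale, zero, shape = quantize_4bit_groupwise(w)
w_hat = dequantize_4bit_groupwise(q, scale, zero, shape)
print("mean abs reconstruction error:", (w - w_hat).abs().mean().item())
```

The real methods differ mainly in how they choose scales and rounding: AWQ rescales salient channels using activation statistics before rounding, while GPTQ rounds weights sequentially and compensates the accumulated error.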
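
Medusa, EAGLE, and Lookahead Decoding (all listed above) attack the sequential bottleneck of autoregressive generation in different ways (extra decoding heads, a learned draft model, n-gram lookahead), but a plain draft-and-verify loop conveys the shared intuition. The sketch below is purely illustrative and uses arbitrary deterministic functions as stand-ins for the models; nothing here reflects any repo's actual API.

```python
# Toy draft-and-verify loop in the spirit of speculative decoding.
# Both "models" are hypothetical next-token functions over int sequences.
from typing import Callable, List

def speculative_decode(target: Callable[[List[int]], int],
                       draft: Callable[[List[int]], int],
                       prompt: List[int], n_new: int, k: int = 4) -> List[int]:
    seq = list(prompt)
    while len(seq) < len(prompt) + n_new:
        # 1) The cheap draft model proposes k tokens autoregressively.
        proposal = [ ]
        for _ in range(k):
            proposal.append(draft(seq + proposal))
        # 2) The target verifies. In a real system all k+1 target
        #    predictions come from ONE batched forward pass, not a loop.
        accepted = []
        for i in range(k):
            t = target(seq + accepted)
            if t == proposal[i]:
                accepted.append(t)   # draft guessed right: keep it
            else:
                accepted.append(t)   # mismatch: take the target's token, stop
                break
        else:
            accepted.append(target(seq + accepted))  # bonus token on full accept
        seq.extend(accepted)
    return seq[:len(prompt) + n_new]

# Toy deterministic "models": next token = (sum of context) mod 50; the
# draft is a cheaper, context-truncated approximation of the target.
target = lambda s: sum(s) % 50
draft = lambda s: sum(s[-8:]) % 50
print(speculative_decode(target, draft, [1, 2, 3], n_new=10))
```

Because every mismatch falls back to the target's own prediction, the output is identical to greedy decoding with the target alone; the practical speedup comes from scoring all k draft tokens in a single batched target pass instead of k sequential ones.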