HabanaAI / vllm-fork
A high-throughput and memory-efficient inference and serving engine for LLMs
☆41 · Updated this week
Related projects
Alternatives and complementary repositories for vllm-fork
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆112 · Updated 8 months ago
- Applied AI experiments and examples for PyTorch ☆159 · Updated last week
- ☆109 · Updated 7 months ago
- Easy and Efficient Quantization for Transformers ☆178 · Updated 3 months ago
- Materials for learning SGLang ☆75 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆43 · Updated this week
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆196 · Updated last week
- A low-latency & high-throughput serving engine for LLMs ☆231 · Updated last month
- ☆162 · Updated 3 months ago
- Dynamic Memory Management for Serving LLMs without PagedAttention ☆222 · Updated this week
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆352 · Updated 5 months ago
- A Python library that transfers PyTorch tensors between CPU and NVMe ☆96 · Updated this week
- ☆189 · Updated this week
- Flash-LLM: Enabling Cost-Effective and Highly-Efficient Large Generative Model Inference with Unstructured Sparsity ☆177 · Updated last year
- ☆55 · Updated 5 months ago
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆196 · Updated 2 months ago
- 🚀 Collection of components for development, training, tuning, and inference of foundation models leveraging PyTorch native components ☆163 · Updated this week
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆302 · Updated 2 months ago
- Boosting 4-bit inference kernels with 2:4 sparsity ☆51 · Updated 2 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) devices. Note… ☆57 · Updated 2 months ago
- Zero Bubble Pipeline Parallelism ☆279 · Updated this week
- ☆79 · Updated 2 months ago
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆277 · Updated 4 months ago
- Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU) ☆152 · Updated this week
- The official implementation of the EMNLP 2023 paper LLM-FP4 ☆166 · Updated 10 months ago
- Pretrain, fine-tune, and serve LLMs on Intel platforms with Ray ☆101 · Updated last week
- ☆88 · Updated 2 months ago
- Fast Matrix Multiplications for Lookup Table-Quantized LLMs ☆183 · Updated last month
- Large Language Model Text Generation Inference on Habana Gaudi ☆26 · Updated this week
- ☆43 · Updated this week