huggingface / tgi-gaudi
Large Language Model Text Generation Inference on Habana Gaudi
☆34 · Updated 9 months ago
Alternatives and similar repositories for tgi-gaudi
Users interested in tgi-gaudi are comparing it to the libraries listed below.
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆203 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆85 · Updated last week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆325 · Updated 3 months ago
- Benchmark suite for LLMs from Fireworks.ai ☆84 · Updated last month
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆169 · Updated 3 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆63 · Updated 6 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated 3 weeks ago
- Easy and Efficient Quantization for Transformers ☆202 · Updated 6 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated 3 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆354 · Updated this week
- An innovative library for efficient LLM inference via low-bit quantization ☆351 · Updated last year
- Intel Gaudi's Megatron DeepSpeed Large Language Models for training ☆16 · Updated last year
- ☆71 · Updated 9 months ago
- ☆219 · Updated 11 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆174 · Updated last week
- ☆322 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆14 · Updated 3 months ago
- Module, Model, and Tensor Serialization/Deserialization ☆280 · Updated 4 months ago
- TPU inference for vLLM, with unified JAX and PyTorch support. ☆202 · Updated this week
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆397 · Updated 6 months ago
- ☆56 · Updated last year
- vLLM adapter for a TGIS-compatible gRPC server. ☆46 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆522 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆62 · Updated 3 months ago
- Genai-bench is a powerful benchmark tool designed for comprehensive token-level performance evaluation of large language model (LLM) serv… ☆250 · Updated 2 weeks ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- ☆60 · Updated last year
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆225 · Updated last week
- Fast and memory-efficient exact attention ☆105 · Updated last week
- KV cache compression for high-throughput LLM inference ☆148 · Updated 10 months ago