huggingface / tgi-gaudi
Large Language Model Text Generation Inference on Habana Gaudi
☆34 · Updated 5 months ago
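tgi-gaudi serves models over the standard Text Generation Inference HTTP API, so a running server can be queried with the usual Hugging Face client. A minimal sketch, assuming a server is already listening on localhost:8080 (the endpoint URL and generation parameters below are illustrative, not taken from this page):

```python
# Query a running tgi-gaudi server through the standard TGI REST API.
# Assumes the server was started separately (e.g. with the Docker image
# from the repo's README) and is listening on localhost:8080.
from huggingface_hub import InferenceClient

client = InferenceClient("http://localhost:8080")  # hypothetical local endpoint

# Calls TGI's /generate endpoint under the hood; the parameters
# mirror TGI's generation options.
output = client.text_generation(
    "What is Habana Gaudi?",
    max_new_tokens=64,
    temperature=0.7,
)
print(output)
```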
Alternatives and similar repositories for tgi-gaudi
Users interested in tgi-gaudi are comparing it to the libraries listed below.
- Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU) ☆194 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs (see the vLLM usage sketch after this list) ☆83 · Updated this week
- Benchmark suite for LLMs from Fireworks.ai ☆83 · Updated 2 weeks ago
- Reference models for Intel(R) Gaudi(R) AI Accelerator ☆167 · Updated 2 weeks ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆315 · Updated this week
- Easy and Efficient Quantization for Transformers ☆203 · Updated 2 months ago
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on Intel GPU (XPU) device. Note… ☆62 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆266 · Updated 11 months ago
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆61 · Updated 4 months ago
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆235 · Updated last week
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆132 · Updated last week
- ☆55 · Updated 9 months ago
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆41 · Updated this week
- ☆59 · Updated last year
- Provides end-to-end model development pipelines for LLMs and Multimodal models that can be launched on-prem or cloud-native. ☆511 · Updated 4 months ago
- ☆241 · Updated last week
- ☆74 · Updated 5 months ago
- JetStream is a throughput and memory optimized engine for LLM inference on XLA devices, starting with TPUs (and GPUs in future -- PRs wel… ☆375 · Updated 3 months ago
- An innovative library for efficient LLM inference via low-bit quantization ☆348 · Updated last year
- ☆296 · Updated last week
- Module, Model, and Tensor Serialization/Deserialization ☆264 · Updated 3 weeks ago
- A tool to configure, launch and manage your machine learning experiments. ☆190 · Updated this week
- 🤗 Optimum Intel: Accelerate inference with Intel optimization tools ☆489 · Updated this week
- vLLM performance dashboard ☆34 · Updated last year
- ☆199 · Updated 4 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆89 · Updated this week
- Google TPU optimizations for transformers models ☆120 · Updated 7 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- Inference server benchmarking tool ☆100 · Updated 4 months ago
- Fast and memory-efficient exact attention ☆93 · Updated last week
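As referenced in the vLLM entry above, vLLM and its hardware forks in this list can be driven through vLLM's offline Python API. A minimal sketch of that API, assuming a stock vLLM install; the model ID is a small example, not a recommendation:

```python
# Offline batch inference with vLLM's Python API.
# The model ID below is illustrative; hardware forks such as the
# Gaudi/HPU fork expose the same entry points.
from vllm import LLM, SamplingParams

prompts = ["What does a text-generation inference server do?"]
params = SamplingParams(temperature=0.7, max_tokens=64)

llm = LLM(model="facebook/opt-125m")  # small model, for illustration only
for out in llm.generate(prompts, params):
    print(out.outputs[0].text)
```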