huggingface / tgi-gaudi
Large Language Model Text Generation Inference on Habana Gaudi
☆34 · Updated 7 months ago
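tgi-gaudi serves the standard TGI HTTP generation API. As a quick orientation, here is a minimal sketch of querying a running server from Python; it assumes a server is already launched and listening on localhost:8080, and the host, port, prompt, and parameters are all illustrative:

```python
# Minimal sketch: query a running tgi-gaudi (TGI) server.
# Assumes the server is already up on localhost:8080; adjust as needed.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is Habana Gaudi?",     # prompt text
        "parameters": {"max_new_tokens": 64},  # TGI generation parameters
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])
```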
Alternatives and similar repositories for tgi-gaudi
Users interested in tgi-gaudi are comparing it to the libraries listed below.
- Easy and lightning-fast training of 🤗 Transformers on the Habana Gaudi processor (HPU) ☆201 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal usage sketch appears after this list) ☆85 · Updated last week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆317 · Updated last month
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆267 · Updated last year
- Benchmark suite for LLMs from Fireworks.ai ☆83 · Updated this week
- Reference models for the Intel(R) Gaudi(R) AI Accelerator ☆167 · Updated last month
- Intel® Extension for DeepSpeed* is an extension to DeepSpeed that brings feature support with SYCL kernels on the Intel GPU (XPU) device. Note… ☆63 · Updated 4 months ago
- Easy and Efficient Quantization for Transformers ☆202 · Updated 4 months ago
- Pretrain, finetune and serve LLMs on Intel platforms with Ray ☆131 · Updated last month
- ☆56 · Updated last year
- A unified library for building, evaluating, and storing speculative decoding algorithms for LLM inference in vLLM ☆70 · Updated this week
- ☆312 · Updated this week
- ArcticInference: vLLM plugin for high-throughput, low-latency inference ☆300 · Updated this week
- IBM development fork of https://github.com/huggingface/text-generation-inference ☆62 · Updated 2 months ago
- ☆218 · Updated 9 months ago
- Intel Gaudi's Megatron-DeepSpeed large language models for training ☆15 · Updated 11 months ago
- Tutorials for running models on first-gen Gaudi and Gaudi2 for training and inference. The source files for the tutorials on https://dev… ☆62 · Updated 2 months ago
- vLLM performance dashboard ☆37 · Updated last year
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated last year
- ☆71 · Updated 7 months ago
- QUICK: Quantization-aware Interleaving and Conflict-free Kernel for efficient LLM inference ☆118 · Updated last year
- The Triton backend for the ONNX Runtime ☆166 · Updated last week
- TPU inference for vLLM, with unified JAX and PyTorch support ☆161 · Updated this week
- ☆57 · Updated last year
- ☆118 · Updated this week
- Dynamic batching library for deep learning inference, with tutorials for LLM and GPT scenarios ☆102 · Updated last year
- A tool to configure, launch and manage your machine learning experiments ☆205 · Updated this week
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆217 · Updated last year
- Google TPU optimizations for transformers models ☆122 · Updated 9 months ago
- ☆122 · Updated last year
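Several entries above are vLLM forks or plugins (the Gaudi fork, ArcticInference, the TPU backend). For context, a minimal offline-inference sketch with upstream vLLM is shown below; the model id and sampling settings are illustrative, and the hardware-specific forks generally expose the same interface:

```python
# Minimal sketch of vLLM offline inference (upstream vLLM API;
# the hardware-specific forks listed above generally match it).
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # any Hugging Face causal-LM id
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["Habana Gaudi is"], params)
for out in outputs:
    print(out.outputs[0].text)
```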